├── README.md ├── backend ├── Dockerfile ├── README.md ├── api_server │ ├── routes.py │ └── ws_router.py ├── common │ └── logger.py ├── core │ ├── task_dispatcher.py │ ├── task_manager.py │ ├── task_stats.py │ └── worker_manager.py ├── main.py ├── proxy │ ├── proxy_manager.py │ └── utils.py ├── requirements.txt ├── schemas │ └── task.py └── user │ └── users_manager.py ├── client ├── Dockerfile ├── common │ └── logger.py ├── core │ ├── system_resources.py │ └── ws_client.py ├── docker-compose.yml ├── framework │ └── solver_core.py ├── requirements.txt ├── run_client.py ├── task_handlers │ ├── AntiTurnstileTaskProxyLess.py │ ├── HcaptchaCracker.py │ └── test.py └── test.py ├── docker-compose.yml ├── frontend ├── Dockerfile ├── README.md ├── common │ └── logger.py ├── nginx.conf ├── package-lock.json ├── package.json ├── public │ └── index.html └── src │ ├── App.jsx │ ├── api.js │ ├── components │ ├── TaskTable.jsx │ └── WorkerTable.jsx │ ├── index.js │ └── pages │ └── Login.jsx ├── img.png ├── install_server_and_frontend.sh └── tmp ├── nginx.conf.template └── nginx.ssl.template /README.md: -------------------------------------------------------------------------------- 1 | # 🧠 brush-captcha - 自建打码平台 2 | 3 | 本项目基于 Camoufox 指纹伪装方案,包含前端页面、后端 API 服务和分布式打码客户端,支持多实例并发运行。 4 | 5 | 目前支持打码类型:`Turnstile`、`hCaptcha`,后续将根据项目需要集成更多类型打码。 6 | 7 | --- 8 | 9 | ## 📦 版本记录(Release History) 10 | 11 | | 版本号 | 日期 | 更新内容 | 备注 | 12 | |--------|------|----------|------| 13 | | v1.1.0 | 2025-04-18 | 1. 使用静态代理打码,由 server 端统一调度
2. 增加 server 鉴权机制
3. 支持 hCaptcha(Gemini 打码)
4. 优化 Turnstile 性能
5. 任务调度优化 | 关键功能升级 | 14 | | v1.0.0 | - | 🎉 首个正式版本发布,仅支持 Turnstile 和动态 IP 打码 | 初始稳定版本 | 15 | 16 | --- 17 | 18 | ## ✅ 验证码支持情况一览 19 | 20 | | 验证码类型 | 支持状态 | 说明 | 21 | |--------------------|----------|------| 22 | | `Turnstile` | ✅ 支持 | - | 23 | | `hCaptcha` | ✅ 支持 | 集成 [QIN2DIM/hcaptcha-challenger](https://github.com/QIN2DIM/hcaptcha-challenger),需 Gemini API Key(性能待优化) | 24 | | `ReCaptchaV2` | 🚧 计划支持 | 当前仅支持 Turnstile | 25 | | `ReCaptchaV3` | 🚧 计划支持 | 需模拟用户行为,未集成 | 26 | | `FunCaptcha` | ❌ 不支持 | 结构复杂,暂不支持 | 27 | | `Geetest` | ❌ 不支持 | 无交互组件模拟逻辑 | 28 | | `ImageToText` | ❌ 不支持 | 不处理纯打码图片 | 29 | | `RotateCaptcha` | ❌ 不支持 | 需模拟旋转交互 | 30 | | `SlideCaptcha` | ❌ 不支持 | 缺乏滑动行为模拟 | 31 | 32 | --- 33 | 34 | ## 📁 项目结构 35 | 36 | ```bash 37 | . 38 | ├── backend/ # Server:提供鉴权、任务调度与 WebSocket 服务 39 | ├── frontend/ # 管理页面:React + Ant Design 40 | ├── client/ # 分布式打码客户端(基于 Camoufox) 41 | ├── docker-compose.yml 42 | └── README.md 43 | ``` 44 | 45 | --- 46 | 47 | ## 🖼 效果预览 48 | 49 | ![img.png](img.png) 50 | 51 | --- 52 | 53 | ## 🚀 快速开始 54 | 55 | ### 🔧 环境准备 56 | 57 | - Python 3.11+ 58 | - Node.js 20+ 59 | - Docker & Docker Compose 60 | - Nginx(建议部署 SSL) 61 | - Gemini API Key(如使用 hCaptcha) 62 | 63 | --- 64 | 65 | ### 🖥 部署server端与前端页面 66 | 67 | ```bash 68 | git clone https://github.com/Brush-Bot/brush-captcha.git 69 | cd brush-captcha 70 | ``` 71 | 72 | 将以下配置文件放入 `tmp/` 目录: 73 | 74 | - `proxies.txt`:代理 IP 列表,一行一条,支持多种格式(包括 user:pass@ip:port) 75 | - `user_keys.txt`:用户 Key 列表,可直接填写 Gemini API Key 76 | - `*.crt / *.key / *.pem`:SSL 证书(支持自动拆分识别) 77 | - `nginx.conf.template / nginx.ssl.template`:Nginx 模板 78 | 79 | ⚡ 示例自签证书命令: 80 | 81 | ```bash 82 | openssl req -x509 -newkey rsa:2048 -nodes \ 83 | -keyout server.key -out server.crt -days 365 \ 84 | -subj "/C=CN/ST=Beijing/L=Beijing/O=MyCompany/OU=Dev/CN=localhost" 85 | ``` 86 | 87 | 执行安装脚本: 88 | 89 | ```bash 90 | bash install_server_and_frontend.sh 91 | ``` 92 | 93 | --- 94 | 95 | ### 🧑‍💻 部署client端 96 | 97 | ```bash 98 | cd client 99 | nano config.yaml 100 | ``` 101 | 102 | 示例配置: 103 | 104 | ```yaml 105 | # 并发数设置(可选,不填则自动根据系统资源计算) 106 | concurrency: null 107 | 108 | # Camoufox 参数配置 109 | camoufox: 110 | # 当前设备支持的打码类型,支持的类型server端才会分配任务 111 | solver_type: 112 | - HcaptchaCracker 113 | - AntiTurnstileTaskProxyLess 114 | # 无头模式,默认打开即可 115 | headless: "true" 116 | 117 | worker: 118 | # 当前设备名称 119 | name: "test" 120 | # 后端api地址,替换ip和port即可;如果没有配置ssl,协议头改成ws;切记不能用127.0.0.1和localhost 121 | wss_url: "wss://ip:8080/ws/worker/" 122 | ``` 123 | 124 | 启动: 125 | 126 | ```bash 127 | docker compose up -d 128 | ``` 129 | 130 | > 📌 注意事项: 131 | > 1. 镜像较大,首次构建需耐心等待; 132 | > 2. 请确保代理连通性(科学上网); 133 | > 3. 
默认监控地址:http://{ip}:8080(默认账号密码均为 `admin`) 134 | 135 | --- 136 | 137 | ## 🛠 手动部署方式(可选) 138 | 139 | ### 🔹 Backend 140 | 141 | ```bash 142 | cd backend 143 | python -m venv .venv && source .venv/bin/activate 144 | pip install -r requirements.txt 145 | uvicorn main:app --host 0.0.0.0 --port 8000 146 | ``` 147 | 148 | 或使用 Docker: 149 | 150 | ```bash 151 | docker build -t brush-backend ./backend 152 | docker run -d -p 8000:8000 brush-backend 153 | ``` 154 | 155 | --- 156 | 157 | ### 🔹 Frontend 158 | 159 | ```bash 160 | cd frontend 161 | npm install 162 | npm run build 163 | 164 | # 本地预览(可选) 165 | npm install -g serve 166 | serve -s dist -l 3000 167 | ``` 168 | 169 | > 推荐使用 Nginx 进行反代部署。 170 | 171 | --- 172 | 173 | ### 🔹 Client 174 | 175 | ```bash 176 | cd client 177 | python -m venv .venv && source .venv/bin/activate 178 | pip install -r requirements.txt 179 | python run_client.py 180 | ``` 181 | 182 | --- 183 | 184 | ## 📄 示例 Nginx 配置 185 | 186 | ```nginx 187 | server { 188 | listen 8998 ssl; 189 | server_name yourdomain; 190 | 191 | ssl_certificate /path/to/server.crt; 192 | ssl_certificate_key /path/to/server.key; 193 | 194 | location / { 195 | root /path/to/frontend/dist; 196 | index index.html; 197 | try_files $uri /index.html; 198 | } 199 | 200 | location /api/ { 201 | proxy_pass http://127.0.0.1:8000/; 202 | proxy_http_version 1.1; 203 | proxy_set_header Host $host; 204 | proxy_set_header X-Real-IP $remote_addr; 205 | } 206 | 207 | location /ws/ { 208 | proxy_pass http://127.0.0.1:8000/; 209 | proxy_http_version 1.1; 210 | proxy_set_header Upgrade $http_upgrade; 211 | proxy_set_header Connection "upgrade"; 212 | } 213 | } 214 | ``` 215 | 216 | --- 217 | 218 | ## 🧩 后续计划 219 | 220 | - [ ] ✅ 支持更多验证码类型(ReCaptcha、FunCaptcha) 221 | - [ ] ⚙️ 引入打码节点优先级调度策略 222 | - [ ] 📈 增加打码成功率统计模块 223 | 224 | --- 225 | 226 | ## 💬 联系与支持 227 | 228 | 如有建议或问题,欢迎提交 [Issues](https://github.com/0xC0FFEE42/brush-captcha/issues) 或 PR 🙌 -------------------------------------------------------------------------------- /backend/Dockerfile: -------------------------------------------------------------------------------- 1 | # backend/Dockerfile 2 | FROM python:3.11-slim 3 | 4 | WORKDIR /app 5 | 6 | COPY . . 
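# Note: because the full source is copied before dependencies are installed,
# the pip layer below is rebuilt on every source change. A common optimization
# (sketch only, not applied in this Dockerfile) is to copy requirements.txt
# first, install, and only then copy the rest of the source:
#   COPY requirements.txt .
#   RUN pip install --no-cache-dir -r requirements.txt
#   COPY . .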
7 | 8 | RUN apt update && apt install -y build-essential \ 9 | && pip install --upgrade pip \ 10 | && pip install --no-cache-dir -r requirements.txt 11 | 12 | EXPOSE 8000 13 | 14 | CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"] -------------------------------------------------------------------------------- /backend/README.md: -------------------------------------------------------------------------------- 1 | # 📘 Backend API Documentation 2 | 3 | Backend 项目统一提供 REST 和 WebSocket 接口,用于任务分发与处理。 4 | 5 | --- 6 | 7 | ## 🏷️ Task – 🎯 CAPTCHA Submission 8 | 9 | ### `POST /createTask` 10 | 11 | **Summary:** 创建打码任务 12 | **Content-Type:** `application/json` 13 | 14 | #### 📥 Request Body: 15 | ```json 16 | { 17 | "clientKey": "string", 18 | "task": { 19 | "type": "AntiTurnstileTaskProxyLess", //server端只校验这个值,检验通过后task中值会原样发送给client 20 | "websiteURL": "https://example.com", 21 | "websiteKey": "site-key-value" 22 | } 23 | } 24 | ``` 25 | 26 | #### 📤 Responses: 27 | 28 | - `200 OK` 29 | ```json 30 | { 31 | "errorId": 0, 32 | "taskId": "uuid-string" 33 | } 34 | ``` 35 | 36 | - `422 Validation Error` 37 | 38 | --- 39 | 40 | ### `POST /getTaskResult` 41 | 42 | **Summary:** 获取打码结果 43 | **Content-Type:** `application/json` 44 | 45 | #### 📥 Request Body: 46 | ```json 47 | { 48 | "clientKey": "string", //非必选,暂时没用 49 | "taskId": "uuid-string" 50 | } 51 | ``` 52 | 53 | #### 📤 Responses: 54 | 55 | - `200 OK` 56 | ```json 57 | { 58 | "errorId": 0, 59 | "status": "ready", 60 | "solution": { 61 | "token": "cf-turnstile-token" //server端不会处理solution,solution中内容是client上传的 62 | } 63 | } 64 | ``` 65 | 66 | - `422 Validation Error` 67 | 68 | --- 69 | 70 | ## 🏷️ REST – 📡 Worker & Task State 71 | 72 | ### `GET /api/nodes` 73 | 74 | **Summary:** 获取所有 Worker 节点状态 75 | 76 | #### 📤 Response: 77 | ```json 78 | [ 79 | { 80 | "id": "worker-id", 81 | "ip": "192.168.1.1", 82 | "task_types": ["AntiTurnstileTaskProxyLess"], 83 | "max_concurrency": 4, 84 | "current_tasks": 2, 85 | "pending_tasks": 1, 86 | "connected_at": "2025-04-12 16:40:12", 87 | "uptime": "126 秒", 88 | "status": "online" 89 | } 90 | ] 91 | ``` 92 | 93 | --- 94 | 95 | ### `GET /api/tasks` 96 | 97 | **Summary:** 获取任务池状态 98 | 99 | #### 📤 Response: 100 | ```json 101 | { 102 | "summary": { 103 | "total": 18553, 104 | "completed": 18333, 105 | "pending": 220, 106 | "assigned": 5 107 | }, 108 | "tasks": [ 109 | { 110 | "taskId": "8b7a2737-60d8-40f0-914b-7d7a91efd72c", 111 | "status": "processing", 112 | "assignedTo": "s2", 113 | "createdAt": "2025-04-12T09:23:34.344576", 114 | "type": "AntiTurnstileTaskProxyLess" 115 | } 116 | ] 117 | } 118 | ``` 119 | 120 | --- 121 | 122 | ## 🏷️ WebSocket – 🔌 节点注册与任务通信 123 | 124 | 连接地址: 125 | 126 | ``` 127 | ws:///worker/{worker_id} 128 | wss:///worker/{worker_id} 129 | ``` 130 | 131 | ### 支持消息类型: 132 | 133 | | type | 描述 | 示例字段 | 134 | |------------------|------------------|------------------------| 135 | | `register` | Worker 注册 | `task_types`, `ip` | 136 | | `status_update` | 主动上报当前状态(当前任务数等) | `current_tasks` | 137 | | `task_result` | 返回任务执行结果 | `taskId`, `result` | 138 | 139 | ### ✅ 示例注册请求: 140 | ```json 141 | { 142 | "type": "register", 143 | "task_types": ["AntiTurnstileTaskProxyLess"], 144 | "ip": "192.168.1.10" 145 | } 146 | ``` 147 | 148 | ### ✅ 示例任务结果上报: 149 | ```json 150 | { 151 | "type": "task_result", 152 | "taskId": "uuid-string", 153 | "result": { 154 | "token": "cf-turnstile-token" 155 | } 156 | } 157 | ``` 158 | 159 | --- 160 | 161 | ## 🧪 调试与开发 162 | 163 | 本地启动命令: 164 | 165 | ```bash 166 | uvicorn 
main:app --reload 167 | ``` 168 | ## ▶️ 一键启动服务 169 | 170 | 本地开发启动 FastAPI 服务: 171 | 172 | ```bash 173 | uvicorn main:app --host 0.0.0.0 --port 8000 --reload 174 | ``` 175 | 176 | Docker 启动(使用已构建镜像): 177 | 178 | ```bash 179 | docker run --rm -it \ 180 | -v "$(pwd)/config.yaml:/app/config.yaml" \ 181 | --name capsolver-client \ 182 | hexbrewe/capsolver-client:latest 183 | ``` 184 | 开发调试访问: 185 | 186 | - Swagger UI: http://localhost:8000/docs 187 | - ReDoc: http://localhost:8000/redoc 188 | 189 | --- 190 | -------------------------------------------------------------------------------- /backend/api_server/routes.py: -------------------------------------------------------------------------------- 1 | from http.client import HTTPException 2 | 3 | from fastapi import APIRouter 4 | from fastapi.responses import JSONResponse 5 | from schemas.task import CreateTaskRequest, GetTaskRequest 6 | from core.task_manager import task_pool 7 | from core.worker_manager import worker_pool, get_worker_status 8 | from uuid import uuid4 9 | from datetime import datetime 10 | from datetime import datetime, timedelta, timezone 11 | from fastapi.encoders import jsonable_encoder 12 | from core.task_dispatcher import Status 13 | from core.task_stats import task_stats 14 | from common.logger import get_logger,emoji 15 | 16 | from user.users_manager import ClientKeyStore 17 | from user.users_manager import ClientKeyStore 18 | 19 | logger = get_logger("routes") 20 | 21 | # router = APIRouter() 22 | 23 | CST = timezone(timedelta(hours=8)) 24 | 25 | def serialize_worker(info): 26 | connected_at = info.get("connected_at") 27 | 28 | if connected_at and connected_at.tzinfo is None: 29 | connected_at = connected_at.replace(tzinfo=timezone.utc) 30 | 31 | return { 32 | "id": info.get("id"), 33 | "ip": info.get("ip", "unknown"), 34 | "task_types": info.get("task_types", []), 35 | "max_concurrency": info.get("max_concurrency", 0), 36 | "current_tasks": info.get("current_tasks", 0), 37 | "pending_tasks": info.get("pending_tasks", 0), 38 | "connected_at": connected_at.astimezone(CST).strftime("%Y-%m-%d %H:%M:%S") if connected_at else None, 39 | "uptime": str((datetime.now(tz=CST) - connected_at.astimezone(CST)).seconds) + " 秒" if connected_at else "N/A", 40 | "status": get_worker_status(info)["status"] 41 | } 42 | def create_router(store:ClientKeyStore)-> APIRouter: 43 | router = APIRouter() 44 | @router.post("/createTask",tags=["Task"]) 45 | async def create_task(req: CreateTaskRequest): 46 | client_key = req.clientKey 47 | logger.debug(store.get_all()) 48 | if not await store.has_key(client_key): 49 | logger.info(emoji("ERROR", f"鉴权失败:{client_key}")) 50 | return {"errorId": -2, "msg": "user is not registered"} 51 | task_id = str(uuid4()) 52 | task_pool[task_id] = { 53 | "taskId": task_id, 54 | "clientKey": req.clientKey, 55 | "status": Status.WAITING, 56 | "assignedTo": None, 57 | "createdAt": datetime.utcnow().replace(tzinfo=timezone.utc).astimezone(CST).strftime("%Y-%m-%d %H:%M:%S"), 58 | "type": req.task.get("type", "Unknown"), 59 | "payload": req.task, 60 | "errorId": 0 61 | } 62 | await task_stats.increment_total() 63 | return {"errorId": 0, "taskId": task_id} 64 | 65 | @router.post("/getTaskResult",tags=["Task"]) 66 | async def get_task(req: GetTaskRequest): 67 | task = task_pool.get(req.taskId) 68 | if not task: 69 | return JSONResponse(status_code=404, content={"errorId": 1, "errorDescription": "task not found"}) 70 | if task["status"] == Status.SUCCESS: 71 | if task["errorId"] == 0: 72 | return {"errorId": 
0,"taskId":req.taskId, "status": "ready", "solution": task["result"]} 73 | else: 74 | return {"errorId": task["errorId"], "taskId": req.taskId, "status": "ready", "solution": task["result"]} 75 | else: 76 | # return {"errorId": 0, "status": task["status"]} 77 | logger.debug(f"getTaskResult {task}") 78 | return {"errorId": 0, "status": "processing"} 79 | 80 | @router.get("/nodes",tags=["REST"]) 81 | async def get_nodes(): 82 | return jsonable_encoder([serialize_worker(info) for info in worker_pool.values()]) 83 | 84 | @router.get("/tasks",tags=["REST"]) 85 | async def get_tasks(): 86 | def safe_str(val): 87 | try: 88 | return str(val) 89 | except: 90 | return "" 91 | 92 | # 过滤未完成任务 93 | active_tasks = [t for t in task_pool.values() if t.get("status") != Status.SUCCESS] 94 | 95 | # 汇总统计 96 | summary = await task_stats.get_stats() 97 | assigned = sum(1 for t in active_tasks if t.get("assignedTo")) 98 | 99 | return { 100 | "summary": { 101 | "total": summary["total"], 102 | "completed": summary["completed"], 103 | "pending": summary["pending"], 104 | "assigned": assigned, 105 | }, 106 | "tasks": jsonable_encoder([ 107 | { 108 | "taskId": t.get("taskId"), 109 | "status": t.get("status"), 110 | "assignedTo": safe_str(t.get("assignedTo")), 111 | "createdAt": ( 112 | t.get("createdAt").isoformat() if isinstance(t.get("createdAt"), datetime) 113 | else t.get("createdAt") if t.get("createdAt") 114 | else None 115 | ), 116 | "type": t.get("type") 117 | } 118 | for t in active_tasks 119 | ]) 120 | } 121 | return router -------------------------------------------------------------------------------- /backend/api_server/ws_router.py: -------------------------------------------------------------------------------- 1 | from fastapi import APIRouter, WebSocket, WebSocketDisconnect 2 | 3 | from core.task_stats import task_stats 4 | from core.worker_manager import register_worker, update_worker_status, worker_pool 5 | from core.task_manager import task_pool 6 | from datetime import datetime 7 | from core.task_manager import Status 8 | from common.logger import get_logger,emoji 9 | logger = get_logger("ws_routes") 10 | router = APIRouter() 11 | @router.websocket("/worker/{worker_id}") 12 | async def worker_ws(ws: WebSocket, worker_id: str): 13 | await ws.accept() 14 | try: 15 | while True: 16 | msg = await ws.receive_json() 17 | logger.debug(f"received message: {msg}") 18 | if msg["type"] == "register": 19 | register_worker(worker_id, ws, msg) 20 | elif msg["type"] == "status_update": 21 | update_worker_status(worker_id, msg) 22 | elif msg["type"] == "task_result": 23 | logger.debug(f"Task result: {msg}") 24 | task_id = msg["taskId"] 25 | error_id = msg["errorId"] 26 | if task_id in task_pool: 27 | task_pool[task_id]["status"] = Status.SUCCESS 28 | task_pool[task_id]["errorId"] = error_id 29 | task_pool[task_id]["result"] = msg["result"] 30 | await task_stats.increment_completed() 31 | # if error_id != 0: 32 | # task_pool[task_id]["status"] = Status.SUCCESS 33 | # task_pool[task_id]["errorId"] = error_id 34 | # task_pool[task_id]["result"] = msg["result"] 35 | # await task_stats.increment_completed() 36 | # else: 37 | # task_pool[task_id]["status"] = Status.SUCCESS 38 | # task_pool[task_id]["errorId"] = 0 39 | # task_pool[task_id]["result"] = msg["result"] 40 | # await task_stats.increment_completed() 41 | except WebSocketDisconnect: 42 | worker_pool.pop(worker_id, None) 43 | -------------------------------------------------------------------------------- /backend/common/logger.py: 
-------------------------------------------------------------------------------- 1 | import logging 2 | import os 3 | from logging.handlers import RotatingFileHandler 4 | 5 | LOG_LEVEL = os.getenv("LOG_LEVEL", "INFO").upper() 6 | LOG_DIR = os.getenv("LOG_DIR", "./logs") 7 | os.makedirs(LOG_DIR, exist_ok=True) 8 | 9 | def get_logger(name: str) -> logging.Logger: 10 | logger = logging.getLogger(name) 11 | if logger.hasHandlers(): 12 | return logger 13 | 14 | logger.setLevel(LOG_LEVEL) 15 | 16 | formatter = logging.Formatter("[%(asctime)s] [%(levelname)s] [%(name)s] %(message)s") 17 | 18 | # 控制台输出 19 | ch = logging.StreamHandler() 20 | ch.setFormatter(formatter) 21 | logger.addHandler(ch) 22 | 23 | # 文件输出(按模块名称区分) 24 | fh = RotatingFileHandler(f"{LOG_DIR}/{name}.log", maxBytes=10 * 1024 * 1024, backupCount=3) 25 | fh.setFormatter(formatter) 26 | logger.addHandler(fh) 27 | 28 | return logger 29 | # common/emoji_log.py 30 | def emoji(level: str, message: str) -> str: 31 | tags = { 32 | "DEBUG": "🐞", 33 | "INFO": "ℹ️", 34 | "SUCCESS": "✅", 35 | "WARNING": "⚠️", 36 | "ERROR": "❌", 37 | "CRITICAL": "🔥", 38 | "TASK": "📌", 39 | "STARTUP": "🚀", 40 | "SHUTDOWN": "🛑", 41 | "NETWORK": "🌐", 42 | "DB": "🗃️" 43 | } 44 | return f"{tags.get(level.upper(), '')} {message}" -------------------------------------------------------------------------------- /backend/core/task_dispatcher.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import time 3 | 4 | from core.task_manager import task_pool,Status 5 | from core.worker_manager import worker_pool 6 | from common.logger import get_logger,emoji 7 | from proxy.proxy_manager import ProxyManager 8 | logger = get_logger("task_dispatcher") 9 | proxy_mgr = ProxyManager() 10 | # 以任务循环,适合少量机器,保证最优空闲调度 11 | async def dispatch_tasks_by_tasks(): 12 | for task in task_pool.values(): 13 | if task["status"] != Status.WAITING: 14 | continue 15 | # 找出支持任务类型、且未满载的 worker 16 | candidates = [ 17 | (wid, w) for wid, w in worker_pool.items() 18 | if task["type"] in w["task_types"] 19 | and w["current_tasks"] < w["max_concurrency"] 20 | ] 21 | 22 | if not candidates: 23 | continue # 暂无可接此任务的 worker 24 | 25 | # 计算空闲数并按空闲数降序排序(空闲多的排前) 26 | candidates.sort(key=lambda x: x[1]["max_concurrency"] - x[1]["current_tasks"], reverse=True) 27 | 28 | worker_id, worker = candidates[0] 29 | 30 | try: 31 | proxy_config = proxy_mgr.assign_proxy() 32 | if proxy_config: 33 | logger.debug(f"proxy config: {proxy_config}") 34 | else: 35 | logger.error(emoji("ERROR","没有可用代理,等待1分钟再开始分配")) 36 | await asyncio.sleep(60) 37 | continue 38 | await worker["ws"].send_json({ 39 | "type": "new_task", 40 | "proxy": proxy_config, 41 | "task": { 42 | "taskId": task["taskId"], 43 | "clientKey": task["clientKey"], 44 | **task["payload"] 45 | } 46 | }) 47 | task["status"] = Status.PENDING 48 | task["assignedTo"] = worker_id 49 | worker["current_tasks"] += 1 50 | logger.info(f"📤 分发任务 {task['taskId']} → {worker_id}") 51 | except Exception as e: 52 | logger.info(f"❌ 发送失败 {worker_id}: {e}") 53 | continue 54 | # 以worker循环,适合大量worker 55 | # 按当前负载比例排序,优先分发给更空闲的 worker 56 | async def dispatch_tasks_by_worker(): 57 | sorted_workers = sorted( 58 | worker_pool.items(), 59 | key=lambda x: (x[1]["current_tasks"] / x[1]["max_concurrency"]) if x[1]["max_concurrency"] > 0 else 1.0 60 | ) 61 | 62 | for worker_id, worker in sorted_workers: 63 | available_slots = worker["max_concurrency"] - worker["current_tasks"] 64 | if available_slots <= 0: 65 | continue 66 | 67 | for task in 
task_pool.values(): 68 | if task["status"] != Status.WAITING: 69 | continue 70 | if task["type"] not in worker["task_types"]: 71 | continue 72 | 73 | try: 74 | proxy_config = proxy_mgr.assign_proxy() 75 | if proxy_config: 76 | logger.debug(f"proxy config: {proxy_config}") 77 | else: 78 | logger.error(emoji("ERROR", "没有可用代理,等待1分钟再开始分配")) 79 | await asyncio.sleep(60) 80 | continue 81 | await worker["ws"].send_json({ 82 | "type": "new_task", 83 | "proxy": proxy_config, 84 | "task": { 85 | "taskId": task["taskId"], 86 | "clientKey": task["clientKey"], 87 | **task["payload"] 88 | } 89 | }) 90 | task["status"] = Status.PENDING 91 | task["assignedTo"] = worker_id 92 | worker["current_tasks"] += 1 93 | logger.info(f"📤 分发任务 {task['taskId']} → {worker_id}") 94 | break # 只分一个任务给这个 worker 95 | except Exception as e: 96 | logger.warning(f"❌ 分发任务 {task['taskId']} 到 {worker_id} 失败: {e}") 97 | continue 98 | # 主循环 99 | async def scheduler_loop(): 100 | while True: 101 | try: 102 | await dispatch_tasks_by_worker() 103 | except Exception as e: 104 | logger.error(f"❌ 调度器异常:{e}") 105 | await asyncio.sleep(0.3) # 分发频率 -------------------------------------------------------------------------------- /backend/core/task_manager.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | from datetime import datetime, timedelta 3 | from enum import Enum 4 | from common.logger import get_logger,emoji 5 | from datetime import datetime 6 | logger = get_logger("task_manager") 7 | task_pool = {} 8 | TASK_TIMEOUT_SECONDS = 300 #任务超时时间 9 | class Status(Enum): 10 | PENDING = "processing" 11 | WAITING = "idle" 12 | SUCCESS = "ready" 13 | TIMEOUT = "timeout" 14 | 15 | async def cleanup_expired_tasks(): 16 | while True: 17 | now = datetime.utcnow() 18 | expired = [ 19 | task_id for task_id, t in task_pool.items() 20 | if t['status'] != Status.SUCCESS and 21 | (now - ( 22 | t['createdAt'] if isinstance(t['createdAt'], datetime) 23 | else datetime.fromisoformat(t['createdAt']) 24 | )).total_seconds() > TASK_TIMEOUT_SECONDS 25 | ] 26 | for task_id in expired: 27 | del task_pool[task_id] 28 | 29 | await asyncio.sleep(30) 30 | -------------------------------------------------------------------------------- /backend/core/task_stats.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | 3 | class TaskStats: 4 | def __init__(self): 5 | self.total = 0 6 | self.completed = 0 7 | self.lock = asyncio.Lock() 8 | 9 | async def increment_total(self): 10 | async with self.lock: 11 | self.total += 1 12 | 13 | async def increment_completed(self): 14 | async with self.lock: 15 | self.completed += 1 16 | 17 | async def get_stats(self): 18 | async with self.lock: 19 | return { 20 | "total": self.total, 21 | "completed": self.completed, 22 | "pending": self.total - self.completed 23 | } 24 | 25 | task_stats = TaskStats() -------------------------------------------------------------------------------- /backend/core/worker_manager.py: -------------------------------------------------------------------------------- 1 | from datetime import datetime 2 | from common.logger import get_logger,emoji 3 | logger = get_logger("worker_manager") 4 | worker_pool = {} 5 | 6 | def register_worker(worker_id, ws, data): 7 | worker_pool[worker_id] = { 8 | "id": worker_id, 9 | "ws": ws, 10 | "task_types": data["task_types"], 11 | "max_concurrency": data["max_concurrency"], 12 | "current_tasks": 0, 13 | "pending_tasks": 0, 14 | "connected_at": datetime.utcnow(), 15 | 
"last_ping": datetime.utcnow(), 16 | "ip": ws.client.host, 17 | } 18 | 19 | def update_worker_status(worker_id, status): 20 | if worker_id in worker_pool: 21 | worker_pool[worker_id]["current_tasks"] = status.get("current_tasks", 0) 22 | worker_pool[worker_id]["pending_tasks"] = status.get("pending_tasks", 0) 23 | worker_pool[worker_id]["last_ping"] = datetime.utcnow() 24 | 25 | def get_worker_status(info): 26 | if info["current_tasks"] + info["pending_tasks"] == 0: 27 | status = "空闲" 28 | elif info["current_tasks"] >= info["max_concurrency"]: 29 | status = "忙碌" 30 | else: 31 | status = "工作中" 32 | return { 33 | **info, 34 | "status": status, 35 | "uptime": str(datetime.utcnow() - info["connected_at"]), 36 | } 37 | -------------------------------------------------------------------------------- /backend/main.py: -------------------------------------------------------------------------------- 1 | from fastapi import FastAPI 2 | from api_server.routes import create_router 3 | from api_server.ws_router import router as ws_router 4 | from core.task_manager import cleanup_expired_tasks 5 | import asyncio 6 | from fastapi.middleware.cors import CORSMiddleware 7 | from core.task_dispatcher import scheduler_loop 8 | from common.logger import get_logger,emoji 9 | from user.users_manager import ClientKeyStore 10 | logger = get_logger("main") 11 | store = ClientKeyStore() 12 | # logger.info(emoji("STARTUP","开始启动")) 13 | tags_metadata = [ 14 | { 15 | "name": "Task", 16 | "description": "打码任务创建和结果获取,兼容capsolver接口", 17 | }, 18 | { 19 | "name": "REST", 20 | "description": "worker节点与任务状态查询,用于前端显示", 21 | }, 22 | { 23 | "name": "WebSocket", 24 | "description": "worker节点注册、状态更新、上报数据、分发任务", 25 | } 26 | ] 27 | app = FastAPI(title="Backend", 28 | description="HTTP + WS API", 29 | version="1.0.0", 30 | openapi_tags=tags_metadata 31 | ) 32 | app.add_middleware( 33 | CORSMiddleware, 34 | allow_origins=["*"], 35 | allow_credentials=True, 36 | allow_methods=["*"], 37 | allow_headers=["*"], 38 | ) 39 | app.include_router(create_router(store)) 40 | app.include_router(ws_router) 41 | 42 | @app.on_event("startup") 43 | async def startup(): 44 | await store.load() 45 | logger.debug(store.get_all()) 46 | asyncio.create_task(cleanup_expired_tasks()) 47 | asyncio.create_task(scheduler_loop()) 48 | 49 | # print(app.routes) 50 | 51 | # if __name__ == "__main__": 52 | # import uvicorn 53 | # uvicorn.run("main:app", host="0.0.0.0", port=8000, reload=True) 54 | -------------------------------------------------------------------------------- /backend/proxy/proxy_manager.py: -------------------------------------------------------------------------------- 1 | import json 2 | import os 3 | import time 4 | from .utils import parse_proxy_line 5 | from common.logger import get_logger,emoji 6 | logger = get_logger("proxy_manager") 7 | USAGE_FILE = os.path.join(os.path.dirname(__file__), "proxy_usage.json") 8 | 9 | class Proxy: 10 | def __init__(self, raw, usage_data=None): 11 | parsed = parse_proxy_line(raw) 12 | self.raw = raw 13 | self.ip = parsed["ip"] 14 | self.port = parsed["port"] 15 | self.user = parsed["user"] 16 | self.password = parsed["pass"] 17 | self.key = f"{self.ip}:{self.port}" 18 | 19 | usage = usage_data.get(self.key, {}) if usage_data else {} 20 | self.usage_count = usage.get("usage_count", 0) 21 | self.last_assigned_time = usage.get("last_assigned_time", 0) 22 | 23 | def is_available(self, now_ts, cooldown=600): 24 | return (now_ts - self.last_assigned_time) >= cooldown 25 | 26 | def get_proxy_config(self): 27 | 
return { 28 | "server": f"http://{self.ip}:{self.port}", 29 | "username": self.user, 30 | "password": self.password 31 | } 32 | 33 | def to_usage_record(self): 34 | return { 35 | "usage_count": self.usage_count, 36 | "last_assigned_time": self.last_assigned_time 37 | } 38 | class ProxyManager: 39 | def __init__(self, path="proxy/proxies.txt"): 40 | self.proxies = [] 41 | self.usage_data = self._load_usage() 42 | self.load_proxies(path) 43 | 44 | def _load_usage(self): 45 | if os.path.exists(USAGE_FILE): 46 | with open(USAGE_FILE, "r") as f: 47 | data = json.loads(f.read()) 48 | logger.info(emoji("SUCCESS", f"共加载了{len(data)}条IP")) 49 | return data 50 | return {} 51 | 52 | def _save_usage(self): 53 | data = {p.key: p.to_usage_record() for p in self.proxies} 54 | with open(USAGE_FILE, "w") as f: 55 | json.dump(data, f, indent=2) 56 | 57 | def load_proxies(self, path): 58 | with open(path, "r") as f: 59 | for line in f: 60 | if line.strip(): 61 | try: 62 | self.proxies.append(Proxy(line.strip(), self.usage_data)) 63 | except ValueError as e: 64 | logger.info(emoji("ERROR",f"跳过无效代理:{line.strip()}")) 65 | def assign_proxy(self): 66 | now = time.time() 67 | available = [p for p in self.proxies if p.is_available(now)] 68 | if not available: 69 | return None 70 | 71 | available.sort(key=lambda p: (p.usage_count, p.last_assigned_time)) 72 | proxy = available[0] 73 | proxy.usage_count += 1 74 | proxy.last_assigned_time = now 75 | self._save_usage() 76 | return proxy.get_proxy_config() 77 | # proxy_mgr = ProxyManager() 78 | # proxy_config = proxy_mgr.assign_proxy() 79 | # if proxy_config: 80 | # print(proxy_config) 81 | # else: 82 | # print("没有可用代理") -------------------------------------------------------------------------------- /backend/proxy/utils.py: -------------------------------------------------------------------------------- 1 | import re 2 | 3 | def parse_proxy_line(line: str) -> dict: 4 | line = line.strip() 5 | 6 | # user:pass@ip:port 7 | m = re.match(r'(?P[^:@]+):(?P[^@]+)@(?P[^:]+):(?P\d+)', line) 8 | if m: 9 | return m.groupdict() 10 | 11 | # ip:port:user:pass 12 | parts = line.split(':') 13 | if len(parts) == 4: 14 | return { 15 | 'ip': parts[0], 16 | 'port': parts[1], 17 | 'user': parts[2], 18 | 'pass': parts[3] 19 | } 20 | 21 | # ip:port 22 | if len(parts) == 2: 23 | return { 24 | 'ip': parts[0], 25 | 'port': parts[1], 26 | 'user': None, 27 | 'pass': None 28 | } 29 | 30 | raise ValueError(f"无法解析代理格式: {line}") -------------------------------------------------------------------------------- /backend/requirements.txt: -------------------------------------------------------------------------------- 1 | annotated-types==0.7.0 2 | anyio==4.9.0 3 | click==8.1.8 4 | fastapi==0.115.12 5 | h11==0.14.0 6 | idna==3.10 7 | pydantic==2.11.3 8 | pydantic_core==2.33.1 9 | sniffio==1.3.1 10 | starlette==0.46.1 11 | typing-inspection==0.4.0 12 | typing_extensions==4.13.1 13 | uvicorn==0.34.0 14 | websockets==15.0.1 15 | aiofiles -------------------------------------------------------------------------------- /backend/schemas/task.py: -------------------------------------------------------------------------------- 1 | from pydantic import BaseModel 2 | 3 | class CreateTaskRequest(BaseModel): 4 | clientKey: str 5 | task: dict 6 | 7 | class GetTaskRequest(BaseModel): 8 | clientKey: str 9 | taskId: str 10 | -------------------------------------------------------------------------------- /backend/user/users_manager.py: -------------------------------------------------------------------------------- 1 | # 
utils/client_key_store.py 2 | import aiofiles 3 | import asyncio 4 | from typing import List 5 | from asyncio import Lock 6 | from common.logger import get_logger,emoji 7 | 8 | logger = get_logger("users_manager") 9 | 10 | 11 | class ClientKeyStore: 12 | def __init__(self, file_path: str = "user/user_keys.txt"): 13 | self.file_path = file_path 14 | self._keys: List[str] = [] 15 | self._lock = Lock() 16 | 17 | async def load(self) -> None: 18 | async with self._lock: 19 | try: 20 | async with aiofiles.open(self.file_path, mode="r") as f: 21 | content = await f.read() 22 | logger.debug(content) 23 | self._keys = content.splitlines() 24 | except FileNotFoundError: 25 | self._keys = [] 26 | 27 | def get_all(self) -> List[str]: 28 | return self._keys 29 | 30 | def add_key(self, key: str) -> None: 31 | if key not in self._keys: 32 | self._keys.append(key) 33 | 34 | def remove_key(self, key: str) -> None: 35 | if key in self._keys: 36 | self._keys.remove(key) 37 | 38 | async def has_key(self, key: str) -> bool: 39 | async with self._lock: 40 | return key in self._keys 41 | 42 | async def save(self) -> None: 43 | async with self._lock: 44 | async with aiofiles.open(self.file_path, mode="w") as f: 45 | await f.write("\n".join(self._keys)) 46 | -------------------------------------------------------------------------------- /client/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM mcr.microsoft.com/playwright/python:v1.43.0-jammy 2 | 3 | WORKDIR /app 4 | COPY . . 5 | 6 | # 安装 Python 依赖和 Camoufox 7 | RUN pip install --no-cache-dir -r requirements.txt \ 8 | && pip install --no-cache-dir camoufox \ 9 | && pip install --no-cache-dir 'camoufox[GeoIP]' 10 | 11 | # 触发 GeoIP 下载(模拟一次带 geoip=True 的 Camoufox 启动) 12 | RUN python -c "from camoufox import Camoufox; Camoufox(geoip=True, headless=True).start().close()" 13 | # 启动任务 14 | ENTRYPOINT ["python", "run_client.py"] 15 | -------------------------------------------------------------------------------- /client/common/logger.py: -------------------------------------------------------------------------------- 1 | import logging 2 | import os 3 | from logging.handlers import RotatingFileHandler 4 | 5 | LOG_LEVEL = os.getenv("LOG_LEVEL", "INFO").upper() 6 | LOG_DIR = os.getenv("LOG_DIR", "./logs") 7 | os.makedirs(LOG_DIR, exist_ok=True) 8 | 9 | def get_logger(name: str) -> logging.Logger: 10 | logger = logging.getLogger(name) 11 | if logger.hasHandlers(): 12 | return logger 13 | 14 | logger.setLevel(LOG_LEVEL) 15 | 16 | formatter = logging.Formatter("[%(asctime)s] [%(levelname)s] [%(name)s] %(message)s") 17 | 18 | # 控制台输出 19 | ch = logging.StreamHandler() 20 | ch.setFormatter(formatter) 21 | logger.addHandler(ch) 22 | 23 | # 文件输出(按模块名称区分) 24 | fh = RotatingFileHandler(f"{LOG_DIR}/{name}.log", maxBytes=10 * 1024 * 1024, backupCount=3) 25 | fh.setFormatter(formatter) 26 | logger.addHandler(fh) 27 | 28 | return logger 29 | # common/emoji_log.py 30 | def emoji(level: str, message: str) -> str: 31 | tags = { 32 | "DEBUG": "🐞", 33 | "INFO": "ℹ️", 34 | "SUCCESS": "✅", 35 | "WARNING": "⚠️", 36 | "ERROR": "❌", 37 | "CRITICAL": "🔥", 38 | "TASK": "📌", 39 | "GETTASK":"📥", 40 | "STARTUP": "🚀", 41 | "SHUTDOWN": "🛑", 42 | "NETWORK": "🌐", 43 | "DB": "🗃️", 44 | "WAIT":"⏳", 45 | } 46 | return f"{tags.get(level.upper(), '')} {message}" -------------------------------------------------------------------------------- /client/core/system_resources.py: -------------------------------------------------------------------------------- 1 
| import os 2 | import psutil 3 | 4 | def auto_concurrency(): 5 | logical_cpu = psutil.cpu_count(logical=True) 6 | physical_cpu = psutil.cpu_count(logical=False) or 1 7 | mem_gb = psutil.virtual_memory().available / (1024 ** 3) 8 | 9 | # 每 1 GB 支持一个并发 10 | mem_based = int(mem_gb) 11 | 12 | # 限制最大并发:不超过物理核心数的 2 倍 13 | max_concurrency = physical_cpu * 2 14 | 15 | # min(逻辑核, 内存估算, 最大限制),最少为 1 16 | return max(1, min(logical_cpu, mem_based, max_concurrency)) -------------------------------------------------------------------------------- /client/core/ws_client.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import os 3 | import ssl 4 | import sys 5 | import websockets 6 | import json 7 | import yaml 8 | import importlib 9 | import traceback 10 | from framework.solver_core import get_solver_config 11 | from core.system_resources import auto_concurrency 12 | from common.logger import get_logger, emoji 13 | 14 | logger = get_logger("ws_client") 15 | 16 | with open("config/config.yaml", "r") as f: 17 | config = yaml.safe_load(f) 18 | 19 | MAX_CONCURRENCY = config.get("concurrency") or auto_concurrency() 20 | task_queue = asyncio.Queue(maxsize=MAX_CONCURRENCY * 2) 21 | semaphore = asyncio.Semaphore(MAX_CONCURRENCY) 22 | 23 | logger.info(emoji("TASK", f"最大允许线程数:{MAX_CONCURRENCY}")) 24 | 25 | def safe_import_handler(module_name: str, filename: str): 26 | try: 27 | return importlib.import_module(f"task_handlers.{module_name}") 28 | except Exception: 29 | pass 30 | 31 | try: 32 | path = os.path.join("task_handlers", filename) 33 | if not os.path.exists(path): 34 | logger.error(f"❌ 文件不存在: {path}") 35 | return None 36 | spec = importlib.util.spec_from_file_location(module_name, path) 37 | if not spec: 38 | logger.error("❌ 创建模块 spec 失败") 39 | return None 40 | module = importlib.util.module_from_spec(spec) 41 | sys.modules[module_name] = module 42 | spec.loader.exec_module(module) 43 | return module 44 | except Exception as e: 45 | logger.error(f"❌ 模块加载失败: {e}") 46 | logger.debug(traceback.format_exc()) 47 | return None 48 | 49 | async def run_task(task, proxy): 50 | module_name = task["type"] 51 | filename = f"{module_name}.py" 52 | handler = safe_import_handler(module_name, filename) 53 | if not handler: 54 | raise RuntimeError(f"无法加载 handler: {module_name}") 55 | 56 | try: 57 | if asyncio.iscoroutinefunction(handler.run): 58 | result = await handler.run(task, proxy) 59 | while asyncio.iscoroutine(result): 60 | result = await result 61 | return result 62 | else: 63 | loop = asyncio.get_event_loop() 64 | return await loop.run_in_executor(None, handler.run, task) 65 | finally: 66 | # 自动清理钩子 67 | if hasattr(handler, "cleanup") and asyncio.iscoroutinefunction(handler.cleanup): 68 | try: 69 | await handler.cleanup() 70 | except Exception as e: 71 | logger.warning(f"⚠️ cleanup 执行失败: {e}") 72 | 73 | async def task_worker(ws): 74 | while True: 75 | task, proxy = await task_queue.get() 76 | async with semaphore: 77 | try: 78 | result = await run_task(task, proxy) 79 | await ws.send(json.dumps({ 80 | "type": "task_result", 81 | "taskId": task["taskId"], 82 | "errorId": 0, 83 | "result": result 84 | })) 85 | except Exception as e: 86 | logger.error(f"❌ 任务执行异常: {e}") 87 | logger.debug(traceback.format_exc()) 88 | await ws.send(json.dumps({ 89 | "type": "task_result", 90 | "taskId": task.get("taskId"), 91 | "errorId": -1, 92 | "result": {"error": str(e)} 93 | })) 94 | task_queue.task_done() 95 | 96 | async def heartbeat(ws): 97 | while True: 98 | 
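# Status report sent every 10 s. semaphore._value (a private attribute) is the
# number of free execution slots, so MAX_CONCURRENCY - _value counts tasks that
# currently hold a slot, while task_queue.qsize() counts tasks still waiting
# locally. The sum is reported as current_tasks (the total load the server
# counts against max_concurrency); the running count alone goes out as
# pending_tasks.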
running_tasks = MAX_CONCURRENCY - semaphore._value 99 | waiting_tasks = task_queue.qsize() 100 | await ws.send(json.dumps({ 101 | "type": "status_update", 102 | "current_tasks": running_tasks + waiting_tasks, 103 | "pending_tasks": running_tasks 104 | })) 105 | await asyncio.sleep(10) 106 | 107 | async def receiver(ws): 108 | while True: 109 | msg = await ws.recv() 110 | data = json.loads(msg) 111 | task = data.get("task") 112 | proxy = data.get("proxy") 113 | logger.info(emoji("GETTASK", f"接收到任务: {task['type']} - {task['taskId']}")) 114 | await task_queue.put((task, proxy)) 115 | 116 | async def worker_main(): 117 | uri = config.get("worker").get("wss_url") + config.get("worker").get("name") 118 | 119 | while True: 120 | tasks = [] 121 | ssl_ctx = None 122 | if uri.startswith("wss://"): 123 | ssl_ctx = ssl._create_unverified_context() 124 | try: 125 | async with websockets.connect(uri,ssl=ssl_ctx) as ws: 126 | await ws.send(json.dumps({ 127 | "type": "register", 128 | "task_types": get_solver_config().get("solver_type"), 129 | "max_concurrency": MAX_CONCURRENCY 130 | })) 131 | logger.info(emoji("SUCCESS", f"已注册: {uri}")) 132 | 133 | tasks.append(asyncio.create_task(heartbeat(ws))) 134 | tasks.append(asyncio.create_task(receiver(ws))) 135 | for _ in range(MAX_CONCURRENCY): 136 | tasks.append(asyncio.create_task(task_worker(ws))) 137 | 138 | await asyncio.gather(*tasks) 139 | 140 | except Exception as e: 141 | logger.warning(emoji("ERROR", f"连接断开: {e}")) 142 | logger.debug(traceback.format_exc()) 143 | finally: 144 | for task in tasks: 145 | task.cancel() 146 | await asyncio.gather(*tasks, return_exceptions=True) 147 | await asyncio.sleep(5) 148 | 149 | if __name__ == "__main__": 150 | asyncio.run(worker_main()) 151 | -------------------------------------------------------------------------------- /client/docker-compose.yml: -------------------------------------------------------------------------------- 1 | services: 2 | client: 3 | build: 4 | context: . 
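# Build context is the client/ directory itself, so the Dockerfile's
# `COPY . .` picks up run_client.py, core/, framework/ and task_handlers/.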
5 | dockerfile: Dockerfile 6 | image: capsolver-client:dev 7 | container_name: capsolver-client 8 | volumes: 9 | - ./config.yaml:/app/config/config.yaml 10 | restart: always -------------------------------------------------------------------------------- /client/framework/solver_core.py: -------------------------------------------------------------------------------- 1 | import yaml 2 | 3 | with open("config/config.yaml", "r") as f: 4 | config = yaml.safe_load(f) 5 | 6 | def get_proxy_url(): 7 | proxy_cfg = config.get("proxy") 8 | if proxy_cfg: 9 | proxy = { 10 | "server": proxy_cfg["server"], 11 | "username": proxy_cfg["username"], 12 | "password": proxy_cfg["password"], 13 | } 14 | return proxy 15 | return None 16 | 17 | def get_solver_config(): 18 | return config.get("camoufox", {}) 19 | -------------------------------------------------------------------------------- /client/requirements.txt: -------------------------------------------------------------------------------- 1 | aiofiles==24.1.0 2 | aiohappyeyeballs==2.6.1 3 | aiohttp==3.11.16 4 | aiosignal==1.3.2 5 | annotated-types==0.7.0 6 | anyio==4.9.0 7 | asyncio==3.4.3 8 | attrs==25.3.0 9 | blinker==1.9.0 10 | browserforge==1.2.3 11 | cachetools==5.5.2 12 | camoufox==0.4.11 13 | certifi==2025.1.31 14 | charset-normalizer==3.4.1 15 | click==8.1.8 16 | contourpy==1.3.2 17 | cycler==0.12.1 18 | Cython==3.0.12 19 | Flask==3.1.0 20 | fonttools==4.57.0 21 | frozenlist==1.5.0 22 | geoip2==5.0.1 23 | google-auth==2.39.0 24 | google-genai==1.10.0 25 | greenlet==3.1.1 26 | h11==0.14.0 27 | h2==4.2.0 28 | hcaptcha-challenger==0.15.3 29 | hpack==4.1.0 30 | httpcore==1.0.8 31 | httpx==0.28.1 32 | Hypercorn==0.17.3 33 | hyperframe==6.1.0 34 | idna==3.10 35 | itsdangerous==2.2.0 36 | Jinja2==3.1.6 37 | kiwisolver==1.4.8 38 | language-tags==1.2.0 39 | loguru==0.7.3 40 | lxml==5.3.2 41 | markdown-it-py==3.0.0 42 | MarkupSafe==3.0.2 43 | matplotlib==3.10.1 44 | maxminddb==2.6.3 45 | mdurl==0.1.2 46 | msgpack==1.1.0 47 | multidict==6.2.0 48 | numpy==2.2.4 49 | opencv-python==4.11.0.86 50 | orjson==3.10.16 51 | packaging==24.2 52 | patchright==1.51.3 53 | pillow==11.2.1 54 | platformdirs==4.3.7 55 | playwright==1.51.0 56 | priority==2.0.0 57 | propcache==0.3.1 58 | psutil==7.0.0 59 | pyasn1==0.6.1 60 | pyasn1_modules==0.4.2 61 | pydantic==2.11.3 62 | pydantic-settings==2.8.1 63 | pydantic_core==2.33.1 64 | pyee==12.1.1 65 | Pygments==2.19.1 66 | pyparsing==3.2.3 67 | PySocks==1.7.1 68 | python-dateutil==2.9.0.post0 69 | python-dotenv==1.1.0 70 | pytz==2025.2 71 | PyYAML==6.0.2 72 | Quart==0.20.0 73 | requests==2.32.3 74 | rich==14.0.0 75 | rsa==4.9.1 76 | screeninfo==0.8.1 77 | shellingham==1.5.4 78 | six==1.17.0 79 | sniffio==1.3.1 80 | tenacity==9.1.2 81 | tqdm==4.67.1 82 | typer==0.15.2 83 | typing-inspection==0.4.0 84 | typing_extensions==4.13.1 85 | ua-parser==1.0.1 86 | ua-parser-builtins==0.18.0.post1 87 | urllib3==2.3.0 88 | websockets==15.0.1 89 | Werkzeug==3.1.3 90 | wsproto==1.2.0 91 | yarl==1.19.0 92 | -------------------------------------------------------------------------------- /client/run_client.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | from core.ws_client import worker_main 3 | 4 | if __name__ == "__main__": 5 | asyncio.run(worker_main()) 6 | -------------------------------------------------------------------------------- /client/task_handlers/AntiTurnstileTaskProxyLess.py: -------------------------------------------------------------------------------- 1 | import 
asyncio 2 | import gc 3 | import json 4 | import logging 5 | import os 6 | import resource 7 | import sys 8 | import time 9 | from typing import Optional 10 | 11 | import yaml 12 | from camoufox.async_api import AsyncCamoufox 13 | # from patchright.async_api import async_playwright 14 | from common.logger import get_logger,emoji 15 | from dataclasses import dataclass 16 | logger = get_logger("Anti") 17 | with open("config/config.yaml", "r") as f: 18 | config = yaml.safe_load(f) 19 | @dataclass 20 | class TurnstileResult: 21 | turnstile_value: Optional[str] 22 | elapsed_time_seconds: float 23 | status: str 24 | reason: Optional[str] = None 25 | 26 | class TurnstileSolver: 27 | HTML_TEMPLATE = """ 28 | 29 | 30 | 31 | 32 | 33 | Turnstile Solver 34 | 35 | 48 | 49 | 50 | 51 |

Fetching your IP...

52 | 53 | 54 | """ 55 | 56 | def __init__(self, debug: bool = False, headless: Optional[bool] = False, useragent: Optional[str] = None, browser_type: str = "chromium"): 57 | self.debug = debug 58 | self.browser_type = browser_type 59 | self.headless = headless 60 | self.useragent = useragent 61 | self.browser_args = [] 62 | if useragent: 63 | self.browser_args.append(f"--user-agent={useragent}") 64 | 65 | async def _setup_page(self, browser, url: str, sitekey: str, action: str = None, cdata: str = None): 66 | if self.browser_type == "chrome": 67 | page = browser.pages[0] 68 | else: 69 | page = await browser.new_page() 70 | 71 | url_with_slash = url + "/" if not url.endswith("/") else url 72 | 73 | if self.debug: 74 | logger.debug(f"Navigating to URL: {url_with_slash}") 75 | 76 | turnstile_div = f'
' 77 | page_data = self.HTML_TEMPLATE.replace("", turnstile_div) 78 | 79 | await page.route(url_with_slash, lambda route: route.fulfill(body=page_data, status=200)) 80 | await page.goto(url_with_slash) 81 | 82 | return page, url_with_slash 83 | 84 | async def _get_turnstile_response(self, page, max_attempts: int = 10) -> Optional[str]: 85 | for attempt in range(max_attempts): 86 | if self.debug: 87 | logger.debug(f"Attempt {attempt + 1}: No Turnstile response yet.") 88 | 89 | try: 90 | turnstile_check = await page.input_value("[name=cf-turnstile-response]") 91 | if turnstile_check == "": 92 | await page.click("//div[@class='cf-turnstile']", timeout=3000) 93 | await asyncio.sleep(3) 94 | else: 95 | return turnstile_check 96 | except Exception as e: 97 | logger.debug(f"Click error: {str(e)}") 98 | continue 99 | return None 100 | 101 | async def solve(self, proxy:json,url: str, sitekey: str, action: str = None, cdata: str = None): 102 | start_time = time.time() 103 | logger.debug(f"Attempting to solve URL: {url}") 104 | proxy_config = config.get("proxy") or {} 105 | # proxy = { 106 | # "server": proxy_config.get("server"), 107 | # "username": proxy_config.get("username"), 108 | # "password": proxy_config.get("password"), 109 | # } 110 | logger.debug(f"Proxy: {proxy},type:{type(proxy)}") 111 | async with AsyncCamoufox( 112 | headless=self.headless, 113 | geoip=True, 114 | proxy=proxy, 115 | ) as browser: 116 | try: 117 | page,url_with_slash = await self._setup_page(browser, url, sitekey, action, cdata) 118 | token = await self._get_turnstile_response(page) 119 | elapsed = round(time.time() - start_time, 2) 120 | 121 | if not token: 122 | logger.error("Failed to retrieve Turnstile value.") 123 | return TurnstileResult(None, elapsed, "failure", "No token obtained") 124 | 125 | logger.info(emoji("SUCCESS", f"Solved Turnstile in {elapsed}s -> {token[:10]}...")) 126 | return TurnstileResult(token, elapsed, "success") 127 | except Exception as e: 128 | logger.error(emoji("ERROR", f"Failed to solve Turnstile: {str(e)}")) 129 | return TurnstileResult(None, elapsed, "failure", str(e)) 130 | finally: 131 | await browser.close() 132 | # 强制垃圾回收 133 | gc.collect() 134 | # 打印内存使用情况 135 | rss_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss 136 | rss_mb = rss_kb / 1024 if sys.platform != "darwin" else rss_kb / (1024 * 1024) # macOS单位不同 137 | logger.debug(f"🧠 内存占用: {rss_mb:.2f} MB") 138 | logger.debug(f"对象数量追踪: {len(gc.get_objects())}") 139 | try: 140 | open_fds = len(os.listdir(f'/proc/{os.getpid()}/fd')) 141 | logger.debug(f"📎 打开文件描述符数: {open_fds}") 142 | except Exception: 143 | pass 144 | 145 | async def get_turnstile_token(proxy:json,url: str, sitekey: str, action: str = None, cdata: str = None, debug: bool = False, headless: bool = False, useragent: str = None): 146 | solver = TurnstileSolver(debug=debug, useragent=useragent, headless=headless) 147 | logger.debug(f"solver: {solver}") 148 | result = await solver.solve(proxy=proxy,url=url, sitekey=sitekey, action=action, cdata=cdata) 149 | return result.__dict__ 150 | 151 | async def run(task_data,proxy): 152 | logger.debug(f"task_data: {task_data}") 153 | url = task_data["websiteURL"] 154 | sitekey = task_data["websiteKey"] 155 | action = task_data.get("metadata", {}).get("action") 156 | logger.debug(f"action: {sitekey}") 157 | headless_str = config.get("camoufox").get("headless", "true") 158 | headless = headless_str.lower() == "true" 159 | logger.debug(f"headless: {headless}") 160 | res = await get_turnstile_token( 161 | proxy=proxy, 162 | 
url=url, 163 | sitekey=sitekey, 164 | action=None, 165 | cdata=None, 166 | debug=False, 167 | headless=headless, 168 | useragent=None, 169 | ) 170 | return { 171 | "token": res["turnstile_value"], 172 | "elapsed": res["elapsed_time_seconds"], 173 | "status": "success" if res["turnstile_value"] else "failure", 174 | "type": "turnstile" 175 | } -------------------------------------------------------------------------------- /client/task_handlers/HcaptchaCracker.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import gc 3 | import json 4 | import os 5 | import resource 6 | import sys 7 | import time 8 | 9 | import yaml 10 | from camoufox.async_api import AsyncCamoufox 11 | from hcaptcha_challenger.agent import AgentV, AgentConfig 12 | from hcaptcha_challenger.models import CaptchaResponse, ChallengeSignal 13 | from hcaptcha_challenger.utils import SiteKey 14 | from common.logger import get_logger,emoji 15 | 16 | logger = get_logger("HCaptcha") 17 | 18 | with open("config/config.yaml", "r") as f: 19 | config = yaml.safe_load(f) 20 | # gemini_key = config.get("apikey").get("gemini_api_key") 21 | # models = config.get("models") 22 | headless_str = config.get("camoufox").get("headless", "true") 23 | headless = headless_str.lower() == "true" 24 | # if gemini_key: 25 | # os.environ["GEMINI_API_KEY"] = gemini_key 26 | # else: 27 | # raise RuntimeError("config.yaml 缺少 gemini_api_key") 28 | 29 | async def run(task_data, proxy): 30 | url = task_data["websiteURL"] 31 | sitekey = task_data["websiteKey"] 32 | print(task_data) 33 | gemini_key = task_data["clientKey"] 34 | action = task_data.get("metadata", {}).get("action", "") 35 | cdata = task_data.get("metadata", {}).get("cdata", "") 36 | 37 | logger.debug(f"🌐 Preparing hCaptcha page at {url}") 38 | start_time = time.time() 39 | async with AsyncCamoufox( 40 | headless=headless, 41 | proxy=proxy, 42 | geoip=True, 43 | args=["--lang=en-US", "--accept-language=en-US,en;q=0.9"] 44 | ) as browser: 45 | try: 46 | page = await browser.new_page() 47 | await page.goto(SiteKey.as_site_link(sitekey)) 48 | 49 | # 初始化 Agent 50 | agent_config = AgentConfig( 51 | GEMINI_API_KEY=gemini_key, 52 | EXECUTION_TIMEOUT = 300, 53 | RESPONSE_TIMEOUT = 30, 54 | RETRY_ON_FAILURE = True, 55 | # CHALLENGE_CLASSIFIER_MODEL=models['CHALLENGE_CLASSIFIER_MODEL'], 56 | # IMAGE_CLASSIFIER_MODEL=models['IMAGE_CLASSIFIER_MODEL'], 57 | # SPATIAL_POINT_REASONER_MODEL=models['SPATIAL_POINT_REASONER_MODEL'], 58 | # SPATIAL_PATH_REASONER_MODEL=models['SPATIAL_PATH_REASONER_MODEL'], 59 | ) 60 | agent = AgentV(page=page, agent_config=agent_config) 61 | 62 | await agent.robotic_arm.click_checkbox() 63 | 64 | # 执行挑战并等待结果 65 | await agent.wait_for_challenge() 66 | elapsed = round(time.time() - start_time, 2) 67 | if agent.cr_list: 68 | cr = agent.cr_list[-1] 69 | cr_data = cr.model_dump() 70 | logger.debug(cr_data) 71 | token = cr_data["generated_pass_UUID"] if cr_data.get("is_pass") else None 72 | logger.info(emoji("SUCCESS", f"Solved Hcaptcha in {elapsed}s -> {token[:10]}...")) 73 | return { 74 | "token": token, 75 | "elapsed": cr_data.get("expiration", 0), 76 | "status": "success" if cr_data.get("is_pass") else "failure", 77 | "type": "hcaptcha" 78 | } 79 | else: 80 | return { 81 | "token": None, 82 | "elapsed": 0, 83 | "status": "failure", 84 | "type": "hcaptcha" 85 | } 86 | 87 | except Exception as e: 88 | logger.error(emoji("ERROR", f"Failed to solve Hcaptcha: {str(e)}")) 89 | return { 90 | "token": None, 91 | "elapsed": 0, 92 | 
"status": "failure", 93 | "type": "hcaptcha" 94 | } 95 | finally: 96 | await page.close() 97 | # 强制垃圾回收 98 | gc.collect() 99 | # 打印内存使用情况 100 | rss_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss 101 | rss_mb = rss_kb / 1024 if sys.platform != "darwin" else rss_kb / (1024 * 1024) # macOS单位不同 102 | logger.debug(f"🧠 内存占用: {rss_mb:.2f} MB") 103 | logger.debug(f"对象数量追踪: {len(gc.get_objects())}") 104 | try: 105 | open_fds = len(os.listdir(f'/proc/{os.getpid()}/fd')) 106 | logger.debug(f"📎 打开文件描述符数: {open_fds}") 107 | except Exception: 108 | pass 109 | # if __name__ == "__main__": 110 | # task_data = { 111 | # "websiteURL": "https://faucet.n1stake.com/", 112 | # "websiteKey": "d0ba98cc-0528-41a0-98fe-dc66945e5416" 113 | # } 114 | # proxy = { 115 | # "server": "http://pr-sg.ip2world.com:6001", 116 | # "username": "capsolver-zone-resi-region-hk", 117 | # "password": "123456" 118 | # } 119 | # 120 | # token = asyncio.run(run(task_data, proxy)) 121 | # print(token) -------------------------------------------------------------------------------- /client/task_handlers/test.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import json 3 | 4 | from playwright.async_api import async_playwright, Page 5 | 6 | from hcaptcha_challenger.agent import AgentV, AgentConfig 7 | from hcaptcha_challenger.models import CaptchaResponse 8 | from hcaptcha_challenger.utils import SiteKey 9 | 10 | 11 | async def challenge(page: Page) -> AgentV: 12 | """Automates the process of solving an hCaptcha challenge.""" 13 | # Initialize the agent configuration with API key (from parameters or environment) 14 | agent_config = AgentConfig() 15 | 16 | # Create an agent instance with the page and configuration 17 | # AgentV appears to be a specialized agent for visual challenges 18 | agent = AgentV(page=page, agent_config=agent_config) 19 | 20 | # Click the hCaptcha checkbox to initiate the challenge 21 | # The robotic_arm is an abstraction for performing UI interactions 22 | await agent.robotic_arm.click_checkbox() 23 | 24 | # Wait for the challenge to appear and be ready for solving 25 | # This may involve waiting for images to load or instructions to appear 26 | await agent.wait_for_challenge() 27 | 28 | # Note: The code ends here, suggesting this is part of a larger solution 29 | # that would continue with challenge solving steps after this point 30 | return agent 31 | 32 | 33 | async def main(): 34 | async with async_playwright() as p: 35 | browser = await p.chromium.launch(headless=False) 36 | context = await browser.new_context() 37 | 38 | # Create a new page in the provided browser context 39 | page = await context.new_page() 40 | 41 | # Navigate to the hCaptcha test page using a predefined site key 42 | # SiteKey.user_easy likely refers to a test/demo hCaptcha with lower difficulty 43 | # await page.goto(SiteKey.as_site_link(SiteKey.discord)) 44 | await page.goto(SiteKey.as_site_link(SiteKey.user_easy)) 45 | 46 | # --- When you encounter hCaptcha in your workflow --- 47 | agent = await challenge(page) 48 | if agent.cr_list: 49 | cr: CaptchaResponse = agent.cr_list[-1] 50 | print(json.dumps(cr.model_dump(by_alias=True), indent=2, ensure_ascii=False)) 51 | 52 | 53 | if __name__ == "__main__": 54 | asyncio.run(main()) 55 | -------------------------------------------------------------------------------- /client/test.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | from task_handlers.HcaptchaCracker import 
run 3 | 4 | async def main(): 5 | proxy = { 6 | "server": "http://capsolver-zone-resi-region-hk:123456@pr-sg.ip2world.com:6001" 7 | 8 | } 9 | task = { 10 | "websiteURL": "https://accounts.hcaptcha.com/demo", 11 | "websiteKey": "00000000-0000-0000-0000-000000000000", # 替换为真实 sitekey 12 | "metadata": { 13 | "label": "bus" 14 | } 15 | } 16 | result = await run(task, proxy) 17 | print(result) 18 | 19 | if __name__ == "__main__": 20 | asyncio.run(main()) -------------------------------------------------------------------------------- /docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "3.9" 2 | 3 | services: 4 | frontend: 5 | container_name: frontend 6 | build: 7 | context: ./frontend 8 | dockerfile: Dockerfile 9 | ports: 10 | - "8080:8080" 11 | depends_on: 12 | - backend 13 | networks: 14 | - brush-net 15 | 16 | backend: 17 | container_name: backend 18 | build: 19 | context: ./backend 20 | dockerfile: Dockerfile 21 | ports: 22 | - "8000:8000" 23 | restart: always 24 | networks: 25 | - brush-net 26 | 27 | networks: 28 | brush-net: 29 | external: true -------------------------------------------------------------------------------- /frontend/Dockerfile: -------------------------------------------------------------------------------- 1 | # 构建阶段 2 | FROM node:18 AS builder 3 | WORKDIR /app 4 | COPY . . 5 | RUN npm install && npm run build 6 | 7 | # 部署阶段:用 nginx 托管构建产物 8 | FROM nginx:alpine 9 | COPY --from=builder /app/build /usr/share/nginx/html 10 | COPY nginx.conf /etc/nginx/conf.d/default.conf 11 | COPY ssl /app/ssl 12 | EXPOSE 8080 13 | CMD ["nginx", "-g", "daemon off;"] -------------------------------------------------------------------------------- /frontend/README.md: -------------------------------------------------------------------------------- 1 | # 🧩 监控面板 2 | 3 | 监控系统内所有节点和任务状态 4 | 5 | ## 📁 目录结构 6 | 7 | - src/ 8 | - index.js 9 | - App.jsx 10 | - api.js 11 | - components/ 12 | - TaskTable.jsx # 任务管理 13 | - WorkerTable.jsx # worker管理 14 | - pages/ 15 | - Login.jsx # 登陆页面,默认账号密码都是admin 16 | 17 | 18 | ## 🚀 启动方式 19 | 20 | ```bash 21 | npm install 22 | npm run start 23 | ``` -------------------------------------------------------------------------------- /frontend/common/logger.py: -------------------------------------------------------------------------------- 1 | import logging 2 | import os 3 | from logging.handlers import RotatingFileHandler 4 | 5 | LOG_LEVEL = os.getenv("LOG_LEVEL", "INFO").upper() 6 | LOG_DIR = os.getenv("LOG_DIR", "./logs") 7 | os.makedirs(LOG_DIR, exist_ok=True) 8 | 9 | def get_logger(name: str) -> logging.Logger: 10 | logger = logging.getLogger(name) 11 | if logger.hasHandlers(): 12 | return logger 13 | 14 | logger.setLevel(LOG_LEVEL) 15 | 16 | formatter = logging.Formatter("[%(asctime)s] [%(levelname)s] [%(name)s] %(message)s") 17 | 18 | # 控制台输出 19 | ch = logging.StreamHandler() 20 | ch.setFormatter(formatter) 21 | logger.addHandler(ch) 22 | 23 | # 文件输出(按模块名称区分) 24 | fh = RotatingFileHandler(f"{LOG_DIR}/{name}.log", maxBytes=10 * 1024 * 1024, backupCount=3) 25 | fh.setFormatter(formatter) 26 | logger.addHandler(fh) 27 | 28 | return logger 29 | # common/emoji_log.py 30 | def emoji(level: str, message: str) -> str: 31 | tags = { 32 | "DEBUG": "🐞", 33 | "INFO": "ℹ️", 34 | "SUCCESS": "✅", 35 | "WARNING": "⚠️", 36 | "ERROR": "❌", 37 | "CRITICAL": "🔥", 38 | "TASK": "📌", 39 | "STARTUP": "🚀", 40 | "SHUTDOWN": "🛑", 41 | "NETWORK": "🌐", 42 | "DB": "🗃️" 43 | } 44 | return f"{tags.get(level.upper(), '')} {message}" 
-------------------------------------------------------------------------------- /frontend/nginx.conf: -------------------------------------------------------------------------------- 1 | server { 2 | listen 8080; 3 | server_name localhost; 4 | 5 | # 静态资源 6 | location / { 7 | root /usr/share/nginx/html; 8 | index index.html; 9 | try_files $uri $uri/ /index.html; 10 | } 11 | 12 | # API 反向代理 13 | location /api/ { 14 | proxy_pass http://backend:8000/; 15 | proxy_http_version 1.1; 16 | proxy_set_header Host $host; 17 | proxy_set_header X-Real-IP $remote_addr; 18 | } 19 | location /ws/ { 20 | proxy_pass http://backend:8000/; 21 | proxy_http_version 1.1; 22 | proxy_set_header Upgrade $http_upgrade; 23 | proxy_set_header Connection "upgrade"; 24 | } 25 | } 26 | -------------------------------------------------------------------------------- /frontend/package.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "frontend-dashboard-auth", 3 | "version": "1.0.0", 4 | "private": true, 5 | "dependencies": { 6 | "antd": "^5.13.5", 7 | "axios": "^1.6.8", 8 | "md5": "^2.3.0", 9 | "react": "^18.2.0", 10 | "react-dom": "^18.2.0", 11 | "react-router-dom": "^6.23.0", 12 | "react-scripts": "5.0.1" 13 | }, 14 | "scripts": { 15 | "start": "react-scripts start", 16 | "build": "react-scripts build" 17 | }, 18 | "browserslist": { 19 | "production": [ 20 | ">0.2%", 21 | "not dead", 22 | "not op_mini all" 23 | ], 24 | "development": [ 25 | "last 1 chrome version", 26 | "last 1 firefox version", 27 | "last 1 safari version" 28 | ] 29 | } 30 | } 31 | -------------------------------------------------------------------------------- /frontend/public/index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | Capsolver Dashboard 7 | 8 | 9 |
10 | 11 | 12 | -------------------------------------------------------------------------------- /frontend/src/App.jsx: -------------------------------------------------------------------------------- 1 | import React, { useEffect, useState } from "react"; 2 | import { 3 | Layout, 4 | Typography, 5 | Divider, 6 | Alert, 7 | Switch, 8 | Button, 9 | ConfigProvider, 10 | theme, 11 | } from "antd"; 12 | import { useNavigate } from "react-router-dom"; 13 | import WorkerTable from "./components/WorkerTable"; 14 | import TaskTable from "./components/TaskTable"; 15 | import { fetchNodes, fetchTasks } from "./api"; 16 | 17 | const { Title } = Typography; 18 | const { Header, Content } = Layout; 19 | const { defaultAlgorithm, darkAlgorithm } = theme; 20 | 21 | export default function App() { 22 | const prefersDark = window.matchMedia?.("(prefers-color-scheme: dark)").matches; 23 | const [isDark, setIsDark] = useState(() => { 24 | const saved = localStorage.getItem("theme"); 25 | return saved ? saved === "dark" : prefersDark; 26 | }); 27 | 28 | const [workers, setWorkers] = useState([]); 29 | const [tasks, setTasks] = useState([]); 30 | const [taskSummary, setTaskSummary] = useState({}); 31 | const [hasError, setHasError] = useState(false); 32 | const navigate = useNavigate(); 33 | 34 | const load = async () => { 35 | try { 36 | const [nodeRes, taskRes] = await Promise.all([fetchNodes(), fetchTasks()]); 37 | setWorkers(Array.isArray(nodeRes.data) ? nodeRes.data : []); 38 | setTasks(Array.isArray(taskRes.data?.tasks) ? taskRes.data.tasks : []); 39 | setTaskSummary(taskRes.data?.summary || {}); 40 | setHasError(false); 41 | } catch (err) { 42 | console.error("❌ 数据加载失败:", err.message); 43 | setWorkers([]); 44 | setTasks([]); 45 | setTaskSummary({}); 46 | setHasError(true); 47 | } 48 | }; 49 | 50 | useEffect(() => { 51 | const token = localStorage.getItem("token"); 52 | if (token !== "21232f297a57a5a743894a0e4a801fc3") { 53 | navigate("/login"); 54 | } 55 | load(); 56 | const interval = setInterval(load, 30000); 57 | return () => clearInterval(interval); 58 | }, [navigate]); 59 | 60 | const handleThemeSwitch = (checked) => { 61 | setIsDark(checked); 62 | localStorage.setItem("theme", checked ? "dark" : "light"); 63 | }; 64 | 65 | return ( 66 | 67 | 68 |
76 | 77 | Capsolver 节点监控 78 | 79 |
80 | 87 | 97 |
98 |
99 | 100 | 101 | {hasError && ( 102 | 108 | )} 109 | 110 | 节点状态 111 | 112 | 113 | 任务队列 114 | 115 | 116 |
117 |
118 | ); 119 | } -------------------------------------------------------------------------------- /frontend/src/api.js: -------------------------------------------------------------------------------- 1 | // frontend/src/api.js 2 | import axios from "axios"; 3 | 4 | export const fetchNodes = () => axios.get("/api/nodes"); 5 | export const fetchTasks = () => axios.get("/api/tasks"); -------------------------------------------------------------------------------- /frontend/src/components/TaskTable.jsx: -------------------------------------------------------------------------------- 1 | import React from "react"; 2 | import { Table, Tag } from "antd"; 3 | 4 | const columns = [ 5 | { title: "任务ID", dataIndex: "taskId" }, 6 | { title: "状态", dataIndex: "status", render: (text) => { 7 | const color = text === "done" ? "green" : text === "waiting" ? "orange" : "blue"; 8 | return {text}; 9 | }}, 10 | { title: "类型", dataIndex: "type" }, 11 | { title: "分配给", dataIndex: "assignedTo" }, 12 | { title: "创建时间", dataIndex: "createdAt" } 13 | ]; 14 | 15 | export default function TaskTable({ data }) { 16 | return ; 17 | } 18 | -------------------------------------------------------------------------------- /frontend/src/components/WorkerTable.jsx: -------------------------------------------------------------------------------- 1 | import React, { useEffect, useState } from "react"; 2 | import { Table, Tag, Row, Col, Card } from "antd"; 3 | 4 | const columns = [ 5 | { title: "ID", dataIndex: "id" }, 6 | { title: "IP", dataIndex: "ip" }, 7 | { 8 | title: "状态", 9 | dataIndex: "status", 10 | render: (text) => { 11 | const color = text === "空闲" ? "green" : text === "忙碌" ? "red" : "orange"; 12 | return {text}; 13 | } 14 | }, 15 | { 16 | title:
容量 / 总任务 / 排队
, 17 | render: (_, r) => { 18 | const total = r.max_concurrency; 19 | const running = r.pending_tasks; 20 | const pending = r.current_tasks - r.pending_tasks; 21 | const current = running + pending; 22 | const isOverload = current > total; 23 | 24 | const blocks = []; 25 | 26 | for (let i = 0; i < Math.max(total, current); i++) { 27 | let color = "#e5e5ea"; // 空闲灰(浅) 28 | 29 | if (i < current) { 30 | color = i < running ? "#5ac8fa" : "#ffd60a"; 31 | if (isOverload) color = "#ff3b30"; 32 | } 33 | 34 | blocks.push( 35 |
45 | ); 46 | } 47 | 48 | return ( 49 |
50 |
60 | {blocks} 61 |
62 |
63 | {total}/{current}/{pending} 64 |
65 |
66 | ); 67 | } 68 | }, 69 | { 70 | title: "任务类型", 71 | dataIndex: "task_types", 72 | render: (tags) => 73 | Array.isArray(tags) 74 | ? tags.map((t) => {t}) 75 | : null 76 | }, 77 | { title: "上线时间", dataIndex: "connected_at" }, 78 | { title: "Uptime", dataIndex: "uptime" } 79 | ]; 80 | 81 | export default function WorkerTable({ data, taskSummary }) { 82 | const { total, completed, pending, assigned } = taskSummary || {}; 83 | 84 | return ( 85 |
86 | {/* 任务统计区 */} 87 | 88 |
89 | 90 |
{total ?? "-"}
91 |
92 | 93 | 94 | 95 |
{completed ?? "-"}
96 |
97 | 98 | 99 | 100 |
{pending ?? "-"}
101 |
102 | 103 | 104 | 105 |
{assigned ?? "-"}
106 |
107 | 108 | 109 | 110 | {/* 节点表格 */} 111 |
119 | 120 | ); 121 | } -------------------------------------------------------------------------------- /frontend/src/index.js: -------------------------------------------------------------------------------- 1 | import React from "react"; 2 | import ReactDOM from "react-dom/client"; 3 | import App from "./App"; 4 | import Login from "./pages/Login"; 5 | import { BrowserRouter, Routes, Route } from "react-router-dom"; 6 | import "antd/dist/reset.css"; 7 | 8 | const root = ReactDOM.createRoot(document.getElementById("root")); 9 | root.render( 10 | 11 | 12 | window.location.href = "/"} />} /> 13 | } /> 14 | 15 | 16 | ); 17 | -------------------------------------------------------------------------------- /frontend/src/pages/Login.jsx: -------------------------------------------------------------------------------- 1 | import React, { useState } from "react"; 2 | import { Button, Input, Card, message } from "antd"; 3 | import md5 from "md5"; 4 | 5 | export default function Login({ onLogin }) { 6 | const [username, setUsername] = useState(""); 7 | const [password, setPassword] = useState(""); 8 | 9 | const handleSubmit = () => { 10 | const hash = md5(password); 11 | if (username === "admin" && hash === "21232f297a57a5a743894a0e4a801fc3") { 12 | localStorage.setItem("token", hash); 13 | onLogin(); 14 | } else { 15 | message.error("用户名或密码错误"); 16 | } 17 | }; 18 | 19 | return ( 20 | 21 | setUsername(e.target.value)} style={{ marginBottom: 8 }} /> 22 | setPassword(e.target.value)} style={{ marginBottom: 8 }} /> 23 | 24 | 25 | ); 26 | } 27 | -------------------------------------------------------------------------------- /img.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Brush-Bot/brush-captcha/40105f55139a486881dc0b203fd0240efb93f067/img.png -------------------------------------------------------------------------------- /install_server_and_frontend.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | echo "=== 一键初始化安装脚本 ===" 4 | sedi() { 5 | if [[ "$(uname)" == "Darwin" ]]; then 6 | sed -i "" "$@" 7 | else 8 | sed -i "$@" 9 | fi 10 | } 11 | echo "📁 检查并复制代理文件:tmp/proxies.txt → backend/proxy/proxies.txt" 12 | if [[ -f tmp/proxies.txt ]]; then 13 | mkdir -p backend/proxy 14 | cp tmp/proxies.txt backend/proxy/proxies.txt 15 | echo "✅ 已复制 proxies.txt" 16 | else 17 | echo "❌ 未找到 tmp/proxies.txt,请先准备代理列表!" 18 | exit 1 19 | fi 20 | echo "📁 检查并复制用户key文件:tmp/user_keys.txt → backend/user/user_keys.txt" 21 | if [[ -f tmp/user_keys.txt ]]; then 22 | mkdir -p backend/user 23 | cp tmp/user_keys.txt backend/user/user_keys.txt 24 | echo "✅ 已复制 user_keys.txt" 25 | else 26 | echo "❌ 未找到 tmp/user_keys.txt,请先准备用户key列表!" 27 | exit 1 28 | fi 29 | 30 | BASE_API_URL="http://backend:8000" 31 | SSL_MODE="off" 32 | 33 | # 是否使用 SSL 34 | read -p "是否启用 SSL?[y/N]: " use_ssl 35 | use_ssl=${use_ssl:-N} 36 | 37 | if [[ "$use_ssl" =~ ^[Yy]$ ]]; then 38 | echo "🔍 检查 tmp/ 下的 SSL 证书文件..." 39 | 40 | mkdir -p frontend/ssl 41 | 42 | crt_file=$(find tmp/ -type f -name "*.crt" | head -n1) 43 | key_file=$(find tmp/ -type f -name "*.key" | head -n1) 44 | pem_files=($(find tmp/ -type f -name "*.pem")) 45 | 46 | if [[ -n "$crt_file" && -n "$key_file" ]]; then 47 | cp "$crt_file" frontend/ssl/server.crt 48 | cp "$key_file" frontend/ssl/server.key 49 | echo "✅ 使用现有 .crt 和 .key 文件" 50 | elif [[ ${#pem_files[@]} -eq 1 ]]; then 51 | echo "🔧 检测到 1 个 PEM 文件,尝试拆分证书和密钥..." 
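# The next two commands split the combined PEM: openssl x509 extracts the certificate
# block and openssl pkey extracts the private key; both read the same input file.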
52 | openssl x509 -in "${pem_files[0]}" -out frontend/ssl/server.crt -outform PEM && \ 53 | openssl pkey -in "${pem_files[0]}" -out frontend/ssl/server.key 54 | if [[ $? -ne 0 ]]; then 55 | echo "❌ PEM 拆分失败,请检查格式" 56 | exit 1 57 | fi 58 | echo "✅ 成功从 PEM 拆出证书和密钥" 59 | elif [[ ${#pem_files[@]} -ge 2 ]]; then 60 | echo "🧩 检测到多个 PEM 文件,请选择哪个是证书:" 61 | select crt_path in "${pem_files[@]}"; do 62 | [[ -n "$crt_path" ]] && break 63 | done 64 | echo "🔑 请选择哪个是密钥:" 65 | select key_path in "${pem_files[@]}"; do 66 | [[ -n "$key_path" && "$key_path" != "$crt_path" ]] && break 67 | done 68 | cp "$crt_path" frontend/ssl/server.crt 69 | cp "$key_path" frontend/ssl/server.key 70 | echo "✅ 已复制用户指定的 PEM 文件" 71 | else 72 | echo "❌ 未找到证书文件(.crt/.key/.pem)" 73 | exit 1 74 | fi 75 | 76 | BASE_API_URL="https://backend:8000" 77 | SSL_MODE="on" 78 | else 79 | BASE_API_URL="http://backend:8000" 80 | SSL_MODE="off" 81 | fi 82 | 83 | # 写入 .env 84 | echo "BASE_API_URL=$BASE_API_URL" > .env 85 | echo "DOCKER_API_URL=http://backend:8000" >> .env 86 | echo "✅ 已写入 .env" 87 | 88 | # 替换 nginx.conf 89 | if [[ "$SSL_MODE" == "on" ]]; then 90 | nginx_template="tmp/nginx.ssl.template" 91 | else 92 | nginx_template="tmp/nginx.conf.template" 93 | fi 94 | if [[ ! -f "$nginx_template" ]]; then 95 | echo "❌ 找不到 nginx 模板文件: $nginx_template" 96 | exit 1 97 | fi 98 | cp "$nginx_template" frontend/nginx.conf 99 | 100 | 101 | # 容器内访问地址 102 | if [[ "$SSL_MODE" == "on" ]]; then 103 | final_wss_url="wss://backend:8000/worker/" 104 | else 105 | final_wss_url="ws://backend:8000/worker/" 106 | fi 107 | # 创建 brush-net 网络(如不存在) 108 | docker network inspect brush-net >/dev/null 2>&1 || docker network create brush-net 109 | 110 | # 启动容器(接入 brush-net 网络) 111 | echo "🚀 正在启动容器..." 112 | docker compose --env-file .env up -d --remove-orphans 113 | 114 | echo "✅ 容器启动完成!"
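# Optional sanity check once the containers are up (example commands only; they assume
# the default 8080 port mapping from docker-compose.yml):
#   docker compose ps                           # frontend and backend should be "running"
#   curl -k https://localhost:8080/api/nodes    # use plain http:// when SSL is off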
115 | 116 | # 提示信息 117 | echo 118 | echo "管理后台地址:ip:8080" 119 | echo "api 地址:ip:8080/api" 120 | echo "wss 地址:ip:8080/ws/worker/" 121 | echo "记得加协议头" 122 | echo -------------------------------------------------------------------------------- /tmp/nginx.conf.template: -------------------------------------------------------------------------------- 1 | server { 2 | listen 8080; 3 | server_name localhost; 4 | 5 | # 静态资源 6 | location / { 7 | root /usr/share/nginx/html; 8 | index index.html; 9 | try_files $uri $uri/ /index.html; 10 | } 11 | 12 | # API 反向代理 13 | location /api/ { 14 | proxy_pass http://backend:8000/; 15 | proxy_http_version 1.1; 16 | proxy_set_header Host $host; 17 | proxy_set_header X-Real-IP $remote_addr; 18 | } 19 | location /ws/ { 20 | proxy_pass http://backend:8000/; 21 | proxy_http_version 1.1; 22 | proxy_set_header Upgrade $http_upgrade; 23 | proxy_set_header Connection "upgrade"; 24 | } 25 | } 26 | -------------------------------------------------------------------------------- /tmp/nginx.ssl.template: -------------------------------------------------------------------------------- 1 | server { 2 | listen 8080 ssl; 3 | server_name localhost; 4 | 5 | ssl_certificate /app/ssl/server.crt; 6 | ssl_certificate_key /app/ssl/server.key; 7 | 8 | ssl_protocols TLSv1.2 TLSv1.3; 9 | ssl_ciphers HIGH:!aNULL:!MD5; 10 | 11 | # 静态资源 12 | location / { 13 | root /usr/share/nginx/html; 14 | index index.html; 15 | try_files $uri $uri/ /index.html; 16 | } 17 | 18 | # API 反向代理 19 | location /api/ { 20 | proxy_pass http://backend:8000/; 21 | proxy_http_version 1.1; 22 | proxy_set_header Host $host; 23 | proxy_set_header X-Real-IP $remote_addr; 24 | } 25 | location /ws/ { 26 | proxy_pass http://backend:8000/; 27 | proxy_http_version 1.1; 28 | proxy_set_header Upgrade $http_upgrade; 29 | proxy_set_header Connection "upgrade"; 30 | } 31 | } 32 | --------------------------------------------------------------------------------
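For reference, the two dashboard endpoints used by `frontend/src/api.js` (`/api/nodes` and `/api/tasks`) can also be polled outside the React app, e.g. for external monitoring. A minimal sketch, assuming the stack is reachable on the default port 8080 and that these read-only routes need no extra headers (both assumptions); the response fields mirror what `App.jsx`, `WorkerTable.jsx` and `TaskTable.jsx` consume:

```python
# Hypothetical monitoring sketch — host, scheme and auth handling are assumptions.
import requests

BASE = "http://localhost:8080"  # switch to https:// if the SSL nginx template was installed

nodes = requests.get(f"{BASE}/api/nodes", timeout=10, verify=False).json()  # verify=False only matters for self-signed https
tasks = requests.get(f"{BASE}/api/tasks", timeout=10, verify=False).json()

print("workers online:", len(nodes) if isinstance(nodes, list) else 0)
print("task summary:", tasks.get("summary", {}))      # total / completed / pending / assigned
for t in tasks.get("tasks", [])[:5]:                  # same fields the TaskTable columns read
    print(t.get("taskId"), t.get("status"), t.get("type"), t.get("assignedTo"))
```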