├── .gitignore ├── LICENSE ├── README.md ├── README_zh.md ├── app.py ├── lang.py ├── requirements.txt └── styles.css /.gitignore: -------------------------------------------------------------------------------- 1 | # Environment files 2 | .env 3 | .apikey # 同时忽略旧密钥文件 4 | 5 | # Python 6 | __pycache__/ 7 | *.py[cod] 8 | .venv/ -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2025 Intelligent-Internet 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 
22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: "CoT-Lab: Human-AI Co-Thinking Laboratory" 3 | emoji: "🤖" 4 | colorFrom: "blue" 5 | colorTo: "gray" 6 | sdk: "gradio" 7 | python_version: "3.13" 8 | sdk_version: "5.13.1" 9 | app_file: "app.py" 10 | models: 11 | - "deepseek-ai/DeepSeek-R1" 12 | tags: 13 | - "writing-assistant" 14 | - "multilingual" 15 | license: "mit" 16 | --- 17 | 18 | # CoT-Lab: Human-AI Co-Thinking Laboratory 19 | [Huggingface Spaces 🤗](https://huggingface.co/spaces/Intelligent-Internet/CoT-Lab) | [GitHub Repository 🌐](https://github.com/Intelligent-Internet/CoT-Lab-Demo) 20 | [中文README](README_zh.md) 21 | 22 | **Sync your thinking with AI reasoning models to achieve deeper cognitive alignment** 23 | Follow, learn, and iterate the thought within one turn 24 | 25 | ## 🌟 Introduction 26 | CoT-Lab is an experimental interface exploring new paradigms in human-AI collaboration. Based on **Cognitive Load Theory** and **Active Learning** principles, it creates a "**Thought Partner**" relationship by enabling: 27 | 28 | - 🧠 **Cognitive Synchronization** 29 | Slow-paced AI output aligned with human information processing speed 30 | - ✍️ **Collaborative Thought Weaving** 31 | Human active participation in AI's Chain of Thought 32 | 33 | 34 | **This project is part of an ongoing exploration. It is under active development; discussion and feedback are welcome!** 35 | 36 | ## 🛠 Usage Guide 37 | ### Basic Operation 38 | 1. **Set Initial Prompt** 39 | Describe your task in the input box (e.g., "Explain quantum computing basics") 40 | 41 | 2. **Adjust Cognitive Parameters** 42 | - ⏱ **Thought Sync Throughput**: tokens/sec - 5:Read-aloud, 10:Follow-along, 50:Skim 43 | - 📏 **Human Thinking Cadence**: Auto-pause every X paragraphs (Default off - recommended for active learning) 44 | 45 | 3.
**Interactive Workflow** 46 | - Click `Generate` to start co-thinking and follow the thinking process 47 | - Edit the AI's reasoning when it pauses - or pause it anytime with `Shift+Enter` 48 | - Use `Shift+Enter` to hand control back to the AI 49 | 50 | ## 🧠 Design Philosophy 51 | - **Cognitive Load Optimization** 52 | Information chunking adapts to working-memory limits; serialized information presentation reduces the cognitive load of visual search 53 | 54 | - **Active Learning Enhancement** 55 | Direct manipulation interface promotes deeper cognitive engagement 56 | 57 | - **Distributed Cognition** 58 | Explore hybrid human-AI problem-solving paradigms 59 | 60 | ## 📥 Installation & Deployment 61 | Local deployment is (currently) required if you want to work with locally hosted LLMs. 62 | Due to degraded performance of the official DeepSeek API, we recommend seeking alternative API providers, or using a locally hosted distilled R1 model for experiments. 63 | 64 | **Prerequisites**: Python 3.11+ | A valid [Deepseek API Key](https://platform.deepseek.com/) or any OpenAI-SDK-compatible API.
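Before launching, it can help to confirm that the three variables the app requires actually resolve. The sketch below is illustrative (not part of the repo); it mirrors the startup check in `app.py`, including the optional `_2` suffix the app probes for a secondary endpoint:

```python
import os

REQUIRED = ("API_KEY", "API_URL", "API_MODEL")

def resolve_api_config(suffix: str = "") -> dict:
    """Mirror app.py's startup check: return the API settings,
    or raise if any are missing. Pass suffix="_2" to probe the
    optional secondary endpoint used for the zh interface."""
    config = {var: os.getenv(f"{var}{suffix}", "") for var in REQUIRED}
    missing = [var for var, value in config.items() if not value]
    if missing:
        raise EnvironmentError(
            f"Missing required environment variables: {', '.join(missing)}"
        )
    return config
```

Note that `app.py` loads variables with `python-dotenv`, so values placed in a local `.env` file count as set.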
65 | 66 | ```bash 67 | # Clone repository 68 | git clone https://github.com/Intelligent-Internet/CoT-Lab-Demo 69 | cd CoT-Lab-Demo 70 | 71 | # Install dependencies 72 | pip install -r requirements.txt 73 | 74 | # Configure environment (write these lines into a .env file) 75 | API_KEY=sk-**** 76 | API_URL=https://api.deepseek.com/beta 77 | API_MODEL=deepseek-reasoner 78 | 79 | # Launch application 80 | python app.py 81 | ``` 82 | 83 | 84 | ## 📄 License 85 | MIT License © 2025 [ii.inc] 86 | 87 | ## Contact 88 | yizhou@ii.inc (Dango233) -------------------------------------------------------------------------------- /README_zh.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: "CoT-Lab: 人机协同思考实验室" 3 | emoji: "🤖" 4 | colorFrom: "blue" 5 | colorTo: "gray" 6 | sdk: "gradio" 7 | python_version: "3.13" 8 | sdk_version: "5.13.1" 9 | app_file: "app.py" 10 | models: 11 | - "deepseek-ai/DeepSeek-R1" 12 | tags: 13 | - "写作助手" 14 | - "多语言" 15 | license: "mit" 16 | --- 17 | 18 | # CoT-Lab: 人机协同思考实验室 19 | [Huggingface空间 🤗](https://huggingface.co/spaces/Intelligent-Internet/CoT-Lab) | [GitHub仓库 🌐](https://github.com/Intelligent-Internet/CoT-Lab-Demo) 20 | [English README](README.md) 21 | 22 | **通过同步人类与AI的思考过程,实现深层次的认知对齐** 23 | 在一轮对话中跟随、学习、迭代思维链 24 | 25 | ## 🌟 项目介绍 26 | CoT-Lab是一个探索人机协作新范式的实验性界面,基于**认知负荷理论**和**主动学习**原则,致力于探索人与AI的"**思考伙伴**"关系。 27 | 28 | - 🧠 **认知同步** 29 | 调节AI输出速度,匹配不同场景下的人类信息处理速度 30 | - ✍️ **思维编织** 31 | 人类主动参与AI的思维链 32 | 33 | **探索性实验项目,正在积极开发中,欢迎讨论与反馈!** 34 | 35 | ## 🛠 使用指南 36 | ### 基本操作 37 | 1. **设置初始提示** 38 | 在输入框描述您的问题(例如"解释量子计算基础") 39 | 40 | 2. **调整认知参数** 41 | - ⏱ **思考同步速度**:词元/秒 - 5:朗读, 10:跟随, 50:跳读 42 | - 📏 **人工思考节奏**:每X段落自动暂停(默认关闭 - 推荐主动学习场景使用) 43 | 44 | 3.
**交互工作流** 45 | - 点击`生成`开始协同思考,跟随思考过程 46 | - AI暂停时可编辑推理过程 - 或随时使用`Shift+Enter`暂停AI输出,进入思考、编辑模式 47 | - 思考、编辑后,使用`Shift+Enter`交还控制权给AI 48 | 49 | ## 🧠 设计理念 50 | - **认知负荷优化** 51 | 信息组块化(Chunking)适配工作记忆限制,序列化信息呈现降低视觉搜索带来的认知负荷 52 | 53 | - **主动学习增强** 54 | 直接操作思维链,促进深度认知投入 55 | 56 | - **分布式认知** 57 | 探索人机协作的问题解决范式 58 | 59 | ## 📥 安装部署 60 | 如希望使用本地部署的大语言模型,您(暂时)需要克隆本项目并在本地运行。 61 | 因近期DeepSeek官方API不稳定,我们建议暂时使用第三方API供应商作为替代方案,或者使用本地部署的R1-Distilled模型进行实验。 62 | 63 | **环境要求**:Python 3.11+ | 有效的[Deepseek API密钥](https://platform.deepseek.com/) 或其他OpenAI SDK兼容的API接口。 64 | 65 | ```bash 66 | # 克隆仓库 67 | git clone https://github.com/Intelligent-Internet/CoT-Lab-Demo 68 | cd CoT-Lab-Demo 69 | 70 | # 安装依赖 71 | pip install -r requirements.txt 72 | 73 | # 配置环境(将以下内容写入 .env 文件) 74 | API_KEY=sk-**** 75 | API_URL=https://api.deepseek.com/beta 76 | API_MODEL=deepseek-reasoner 77 | 78 | # 启动应用 79 | python app.py 80 | ``` 81 | 82 | ## 📄 许可协议 83 | MIT License © 2025 [ii.inc] 84 | 85 | ## Contact 86 | yizhou@ii.inc (Dango233) -------------------------------------------------------------------------------- /app.py: -------------------------------------------------------------------------------- 1 | from openai import OpenAI 2 | from dotenv import load_dotenv 3 | import os 4 | import threading 5 | import time 6 | import gradio as gr 7 | from lang import LANGUAGE_CONFIG 8 | 9 | # 环境变量预校验 10 | load_dotenv(override=True) 11 | required_env_vars = ["API_KEY", "API_URL", "API_MODEL"] 12 | secondary_api_exists = all( 13 | os.getenv(f"{var}_2") for var in ["API_KEY", "API_URL", "API_MODEL"] 14 | ) 15 | missing_vars = [var for var in required_env_vars if not os.getenv(var)] 16 | if missing_vars: 17 | raise EnvironmentError( 18 | f"Missing required environment variables: {', '.join(missing_vars)}" 19 | ) 20 | 21 | 22 | class AppConfig: 23 | DEFAULT_THROUGHPUT = 10 24 | SYNC_THRESHOLD_DEFAULT = 0 25 | API_TIMEOUT = 20 26 | 27 | 28 | class DynamicState: 29 | """动态UI状态""" 30 | 31 | def __init__(self): 32 | self.should_stream = False 33
| self.stream_completed = False 34 | self.in_cot = True 35 | self.current_language = "en" 36 | self.waiting_api = False # 新增等待状态标志 37 | self.label_passthrough = False 38 | 39 | def control_button_handler(self): 40 | original_state = self.should_stream 41 | self.should_stream = not self.should_stream 42 | 43 | # 当从暂停->生成时激活等待状态 44 | if not original_state and self.should_stream: 45 | self.waiting_api = True 46 | self.stream_completed = False 47 | 48 | return self.ui_state_controller() 49 | 50 | def ui_state_controller(self): 51 | """生成动态UI组件状态""" 52 | # [control_button, thought_editor, reset_button] 53 | lang_data = LANGUAGE_CONFIG[self.current_language] 54 | control_value = ( 55 | lang_data["pause_btn"] if self.should_stream else lang_data["generate_btn"] 56 | ) 57 | control_variant = "secondary" if self.should_stream else "primary" 58 | # 处理等待状态显示 59 | if self.waiting_api and self.should_stream: 60 | status_suffix = lang_data["waiting_api"] 61 | elif self.waiting_api and not self.should_stream: 62 | status_suffix = lang_data["api_retry"] 63 | else: 64 | status_suffix = ( 65 | lang_data["completed"] 66 | if self.stream_completed 67 | else lang_data["interrupted"] 68 | ) 69 | editor_label = f"{lang_data['editor_label']} - {status_suffix}" 70 | output = ( 71 | gr.update(value=control_value, variant=control_variant), 72 | gr.update() if self.label_passthrough else gr.update(label=editor_label), 73 | gr.update(interactive=not self.should_stream), 74 | ) 75 | self.label_passthrough = False 76 | return output 77 | 78 | def reset_workspace(self): 79 | """重置工作区状态""" 80 | self.stream_completed = False 81 | self.should_stream = False 82 | self.in_cot = True 83 | self.waiting_api = False 84 | 85 | return self.ui_state_controller() + ( 86 | "", 87 | "", 88 | LANGUAGE_CONFIG["en"]["bot_default"], 89 | DEFAULT_PERSISTENT, 90 | ) 91 | 92 | 93 | class CoordinationManager: 94 | """管理人类与AI的协同节奏""" 95 | 96 | def __init__(self, paragraph_threshold, initial_content): 97 | 
self.paragraph_threshold = paragraph_threshold 98 | self.initial_paragraph_count = initial_content.count("\n\n") 99 | self.triggered = False 100 | 101 | def should_pause_for_human(self, current_content): 102 | if self.paragraph_threshold <= 0 or self.triggered: 103 | return False 104 | 105 | current_paragraphs = current_content.count("\n\n") 106 | if ( 107 | current_paragraphs - self.initial_paragraph_count 108 | >= self.paragraph_threshold 109 | ): 110 | self.triggered = True 111 | return True 112 | return False 113 | 114 | 115 | class ConvoState: 116 | """State of current ROUND of convo""" 117 | 118 | def __init__(self): 119 | self.throughput = AppConfig.DEFAULT_THROUGHPUT 120 | self.sync_threshold = AppConfig.SYNC_THRESHOLD_DEFAULT 121 | self.current_language = "en" 122 | self.convo = [] 123 | self.initialize_new_round() 124 | self.is_error = False 125 | self.result_editing_toggle = False 126 | self.is_seperate_reasoning = False 127 | self.in_seperate_reasoning = False 128 | 129 | def get_api_config(self, language): 130 | suffix = "_2" if language == "zh" and secondary_api_exists else "" 131 | return { 132 | "key": os.getenv(f"API_KEY{suffix}"), 133 | "url": os.getenv(f"API_URL{suffix}"), 134 | "model": os.getenv(f"API_MODEL{suffix}"), 135 | } 136 | 137 | def initialize_new_round(self): 138 | self.current = {} 139 | self.current["user"] = "" 140 | self.current["cot"] = "" 141 | self.current["result"] = "" 142 | self.current["raw"] = "" 143 | self.convo.append(self.current) 144 | 145 | def flatten_output(self): 146 | output = [] 147 | for round in self.convo: 148 | output.append({"role": "user", "content": round["user"]}) 149 | if len(round["cot"]) > 0: 150 | output.append( 151 | { 152 | "role": "assistant", 153 | "content": round["cot"], 154 | "metadata": {"title": f"Chain of Thought"}, 155 | } 156 | ) 157 | if len(round["result"]) > 0: 158 | output.append({"role": "assistant", "content": round["result"]}) 159 | return output 160 | 161 | def 
generate_ai_response(self, user_prompt, current_content, dynamic_state): 162 | lang_data = LANGUAGE_CONFIG[self.current_language] 163 | dynamic_state.stream_completed = False 164 | full_response = current_content 165 | self.current["raw"] = full_response 166 | api_config = self.get_api_config(self.current_language) 167 | api_client = OpenAI( 168 | api_key=api_config["key"], 169 | base_url=api_config["url"], 170 | timeout=AppConfig.API_TIMEOUT, 171 | ) 172 | 173 | coordinator = CoordinationManager(self.sync_threshold, current_content) 174 | 175 | editor_output = current_content 176 | 177 | try: 178 | 179 | # 初始等待状态更新 180 | if dynamic_state.waiting_api: 181 | status = lang_data["waiting_api"] 182 | editor_label = f"{lang_data['editor_label']} - {status}" 183 | yield full_response, gr.update( 184 | label=editor_label 185 | ), self.flatten_output() 186 | 187 | coordinator = CoordinationManager(self.sync_threshold, current_content) 188 | messages = [ 189 | {"role": "user", "content": user_prompt}, 190 | { 191 | "role": "assistant", 192 | "content": f"<think>\n{current_content}", 193 | "prefix": True, 194 | }, 195 | ] 196 | self.current["user"] = user_prompt 197 | response_stream = api_client.chat.completions.create( 198 | model=api_config["model"], 199 | messages=messages, 200 | stream=True, 201 | timeout=AppConfig.API_TIMEOUT, 202 | top_p=0.95, 203 | temperature=0.6, 204 | ) 205 | for chunk in response_stream: 206 | print(chunk) 207 | chunk_content = "" 208 | if hasattr(chunk.choices[0].delta, "reasoning_content") and chunk.choices[0].delta.reasoning_content: 209 | chunk_content = chunk.choices[0].delta.reasoning_content 210 | self.is_seperate_reasoning = True 211 | self.in_seperate_reasoning = True 212 | 213 | elif chunk.choices[0].delta.content: 214 | chunk_content = chunk.choices[0].delta.content 215 | if self.in_seperate_reasoning: 216 | chunk_content = "</think>" + chunk_content 217 | self.in_seperate_reasoning = False 218 | if ( 219
coordinator.should_pause_for_human(full_response) 220 | and dynamic_state.in_cot 221 | ): 222 | dynamic_state.should_stream = False 223 | if not dynamic_state.should_stream: 224 | break 225 | 226 | if chunk_content: 227 | dynamic_state.waiting_api = False 228 | full_response += chunk_content.replace("<think>", "") 229 | self.current["raw"] = full_response 230 | # Update Convo State 231 | think_complete = "</think>" in full_response 232 | dynamic_state.in_cot = not think_complete 233 | if think_complete: 234 | self.current["cot"], self.current["result"] = ( 235 | full_response.split("</think>") 236 | ) 237 | else: 238 | self.current["cot"], self.current["result"] = ( 239 | full_response, 240 | "", 241 | ) 242 | status = ( 243 | lang_data["loading_thinking"] 244 | if dynamic_state.in_cot 245 | else lang_data["loading_output"] 246 | ) 247 | editor_label = f"{lang_data['editor_label']} - {status}" 248 | if self.result_editing_toggle: 249 | editor_output = full_response 250 | else: 251 | editor_output = self.current["cot"] + ( 252 | "</think>" if think_complete else "" 253 | ) 254 | yield editor_output, gr.update( 255 | label=editor_label 256 | ), self.flatten_output() 257 | 258 | interval = 1.0 / self.throughput 259 | start_time = time.time() 260 | while ( 261 | (time.time() - start_time) < interval 262 | and dynamic_state.should_stream 263 | and dynamic_state.in_cot 264 | ): 265 | time.sleep(0.005) 266 | 267 | except Exception as e: 268 | if str(e) == "list index out of range": 269 | dynamic_state.stream_completed = True 270 | else: 271 | if str(e) == "The read operation timed out": 272 | error_msg = lang_data["api_interrupted"] 273 | else: 274 | error_msg = "❓ " + str(e) 275 | # full_response += f"\n\n[{error_msg}: {str(e)}]" 276 | editor_label_error = f"{lang_data['editor_label']} - {error_msg}" 277 | self.is_error = True 278 | dynamic_state.label_passthrough = True 279 | 280 | finally: 281 | dynamic_state.should_stream = False 282 | if "response_stream" in locals(): 283 | response_stream.close()
284 | final_status = ( 285 | lang_data["completed"] 286 | if dynamic_state.stream_completed 287 | else lang_data["interrupted"] 288 | ) 289 | editor_label = f"{lang_data['editor_label']} - {final_status}" 290 | if not self.is_error: 291 | yield editor_output, gr.update( 292 | label=editor_label 293 | ), self.flatten_output() 294 | else: 295 | yield editor_output, gr.update( 296 | label=editor_label_error 297 | ), self.flatten_output() + [ 298 | { 299 | "role": "assistant", 300 | "content": error_msg, 301 | "metadata": {"title": f"❌Error"}, 302 | } 303 | ] 304 | self.is_error = False 305 | 306 | 307 | def update_interface_language(selected_lang, convo_state, dynamic_state): 308 | """更新界面语言配置""" 309 | convo_state.current_language = selected_lang 310 | dynamic_state.current_language = selected_lang 311 | lang_data = LANGUAGE_CONFIG[selected_lang] 312 | base_editor_label = lang_data["editor_label"] 313 | status_suffix = ( 314 | lang_data["completed"] 315 | if dynamic_state.stream_completed 316 | else lang_data["interrupted"] 317 | ) 318 | editor_label = f"{base_editor_label} - {status_suffix}" 319 | api_config = convo_state.get_api_config(selected_lang) 320 | new_bot_content = [ 321 | { 322 | "role": "assistant", 323 | "content": f"{selected_lang} - Running `{api_config['model']}` @ {api_config['url']}", 324 | "metadata": {"title": f"API INFO"}, 325 | } 326 | ] 327 | return [ 328 | gr.update(value=f"{lang_data['title']}"), 329 | gr.update( 330 | label=lang_data["prompt_label"], placeholder=lang_data["prompt_placeholder"] 331 | ), 332 | gr.update(label=editor_label, placeholder=lang_data["editor_placeholder"]), 333 | gr.update( 334 | label=lang_data["sync_threshold_label"], 335 | info=lang_data["sync_threshold_info"], 336 | ), 337 | gr.update( 338 | label=lang_data["throughput_label"], info=lang_data["throughput_info"] 339 | ), 340 | gr.update( 341 | value=( 342 | lang_data["pause_btn"] 343 | if dynamic_state.should_stream 344 | else lang_data["generate_btn"] 345 | ), 
346 | variant="secondary" if dynamic_state.should_stream else "primary", 347 | ), 348 | gr.update(label=lang_data["language_label"]), 349 | gr.update( 350 | value=lang_data["clear_btn"], interactive=not dynamic_state.should_stream 351 | ), 352 | gr.update(value=lang_data["introduction"]), 353 | gr.update( 354 | value=lang_data["bot_default"] + new_bot_content, 355 | label=lang_data["bot_label"], 356 | ), 357 | gr.update(label=lang_data["result_editing_toggle"]), 358 | ] 359 | 360 | 361 | theme = gr.themes.Base(font="system-ui", primary_hue="stone") 362 | 363 | with gr.Blocks(theme=theme, css_paths="styles.css") as demo: 364 | DEFAULT_PERSISTENT = {"prompt_input": "", "thought_editor": ""} 365 | convo_state = gr.State(ConvoState) 366 | dynamic_state = gr.State(DynamicState) 367 | persistent_state = gr.BrowserState(DEFAULT_PERSISTENT) 368 | 369 | bot_default = LANGUAGE_CONFIG["en"]["bot_default"] + [ 370 | { 371 | "role": "assistant", 372 | "content": f"Running `{os.getenv('API_MODEL')}` @ {os.getenv('API_URL')}", 373 | "metadata": {"title": f"EN API INFO"}, 374 | } 375 | ] 376 | if secondary_api_exists: 377 | bot_default.append( 378 | { 379 | "role": "assistant", 380 | "content": f"Switch to zh ↗ to use SiliconFlow API: `{os.getenv('API_MODEL_2')}` @ {os.getenv('API_URL_2')}", 381 | "metadata": {"title": f"CN API INFO"}, 382 | } 383 | ) 384 | 385 | with gr.Row(variant=""): 386 | title_md = gr.Markdown( 387 | f"{LANGUAGE_CONFIG['en']['title']} ", 388 | container=False, 389 | ) 390 | lang_selector = gr.Dropdown( 391 | choices=["en", "zh"], 392 | value="en", 393 | elem_id="compact_lang_selector", 394 | scale=0, 395 | container=False, 396 | ) 397 | 398 | with gr.Row(equal_height=True): 399 | with gr.Column(scale=1, min_width=400): 400 | prompt_input = gr.Textbox( 401 | label=LANGUAGE_CONFIG["en"]["prompt_label"], 402 | lines=2, 403 | placeholder=LANGUAGE_CONFIG["en"]["prompt_placeholder"], 404 | max_lines=2, 405 | ) 406 | thought_editor = gr.Textbox( 407 | 
label=f"{LANGUAGE_CONFIG['en']['editor_label']} - {LANGUAGE_CONFIG['en']['editor_default']}", 408 | lines=16, 409 | max_lines=16, 410 | placeholder=LANGUAGE_CONFIG["en"]["editor_placeholder"], 411 | autofocus=True, 412 | elem_id="editor", 413 | ) 414 | with gr.Row(): 415 | control_button = gr.Button( 416 | value=LANGUAGE_CONFIG["en"]["generate_btn"], variant="primary" 417 | ) 418 | next_turn_btn = gr.Button( 419 | value=LANGUAGE_CONFIG["en"]["clear_btn"], interactive=True 420 | ) 421 | 422 | with gr.Column(scale=1, min_width=500): 423 | chatbot = gr.Chatbot( 424 | type="messages", 425 | height=300, 426 | value=bot_default, 427 | group_consecutive_messages=False, 428 | show_copy_all_button=True, 429 | show_share_button=True, 430 | label=LANGUAGE_CONFIG["en"]["bot_label"], 431 | ) 432 | with gr.Row(): 433 | sync_threshold_slider = gr.Slider( 434 | minimum=0, 435 | maximum=20, 436 | value=AppConfig.SYNC_THRESHOLD_DEFAULT, 437 | step=1, 438 | label=LANGUAGE_CONFIG["en"]["sync_threshold_label"], 439 | info=LANGUAGE_CONFIG["en"]["sync_threshold_info"], 440 | ) 441 | throughput_control = gr.Slider( 442 | minimum=1, 443 | maximum=100, 444 | value=AppConfig.DEFAULT_THROUGHPUT, 445 | step=1, 446 | label=LANGUAGE_CONFIG["en"]["throughput_label"], 447 | info=LANGUAGE_CONFIG["en"]["throughput_info"], 448 | ) 449 | result_editing_toggle = gr.Checkbox( 450 | label=LANGUAGE_CONFIG["en"]["result_editing_toggle"], 451 | interactive=True, 452 | scale=0, 453 | container=False, 454 | ) 455 | 456 | intro_md = gr.Markdown(LANGUAGE_CONFIG["en"]["introduction"], visible=False) 457 | 458 | @demo.load(inputs=[persistent_state], outputs=[prompt_input, thought_editor]) 459 | def recover_persistent_state(persistant_state): 460 | if persistant_state["prompt_input"] or persistant_state["thought_editor"]: 461 | return persistant_state["prompt_input"], persistant_state["thought_editor"] 462 | else: 463 | return gr.update(), gr.update() 464 | 465 | # 交互逻辑 466 | stateful_ui = (control_button, 
thought_editor, next_turn_btn) 467 | 468 | throughput_control.change( 469 | lambda val, s: setattr(s, "throughput", val), 470 | [throughput_control, convo_state], 471 | None, 472 | concurrency_limit=None, 473 | ) 474 | 475 | sync_threshold_slider.change( 476 | lambda val, s: setattr(s, "sync_threshold", val), 477 | [sync_threshold_slider, convo_state], 478 | None, 479 | concurrency_limit=None, 480 | ) 481 | 482 | def wrap_stream_generator(convo_state, dynamic_state, prompt, content): 483 | for response in convo_state.generate_ai_response( 484 | prompt, content, dynamic_state 485 | ): 486 | yield response + ( 487 | { 488 | "prompt_input": convo_state.current["user"], 489 | "thought_editor": convo_state.current["cot"], 490 | }, 491 | ) 492 | 493 | gr.on( 494 | [control_button.click, prompt_input.submit, thought_editor.submit], 495 | lambda d: d.control_button_handler(), 496 | [dynamic_state], 497 | stateful_ui, 498 | show_progress=False, 499 | concurrency_limit=None, 500 | ).then( 501 | wrap_stream_generator, 502 | [convo_state, dynamic_state, prompt_input, thought_editor], 503 | [thought_editor, thought_editor, chatbot, persistent_state], 504 | concurrency_limit=1000, 505 | ).then( 506 | lambda d: d.ui_state_controller(), 507 | [dynamic_state], 508 | stateful_ui, 509 | show_progress=False, 510 | concurrency_limit=None, 511 | ) 512 | 513 | next_turn_btn.click( 514 | lambda d: d.reset_workspace(), 515 | [dynamic_state], 516 | stateful_ui + (thought_editor, prompt_input, chatbot, persistent_state), 517 | concurrency_limit=None, 518 | show_progress=False, 519 | ) 520 | 521 | def toggle_editor_result(convo_state, allow): 522 | setattr(convo_state, "result_editing_toggle", allow) 523 | if allow: 524 | return gr.update(value=convo_state.current["raw"]) 525 | else: 526 | return gr.update(value=convo_state.current["cot"]) 527 | 528 | result_editing_toggle.change( 529 | toggle_editor_result, 530 | inputs=[convo_state, result_editing_toggle], 531 | outputs=[thought_editor], 
532 | ) 533 | 534 | lang_selector.change( 535 | lambda lang, s, d: update_interface_language(lang, s, d), 536 | [lang_selector, convo_state, dynamic_state], 537 | [ 538 | title_md, 539 | prompt_input, 540 | thought_editor, 541 | sync_threshold_slider, 542 | throughput_control, 543 | control_button, 544 | lang_selector, 545 | next_turn_btn, 546 | intro_md, 547 | chatbot, 548 | result_editing_toggle, 549 | ], 550 | concurrency_limit=None, 551 | ) 552 | 553 | if __name__ == "__main__": 554 | demo.queue(default_concurrency_limit=1000) 555 | demo.launch() 556 | -------------------------------------------------------------------------------- /lang.py: -------------------------------------------------------------------------------- 1 | # lang.py 2 | LANGUAGE_CONFIG = { 3 | "en": { 4 | "title": "## CoT-Lab: Human-AI Co-Thinking Laboratory \nFollow, learn, and iterate the thought within one turn. Consider cloning the repo and running it with your own API key for a better experience. \n GitHub: https://github.com/Intelligent-Internet/CoT-Lab-Demo", 5 | "prompt_label": "Task Description - Prompt", 6 | "prompt_placeholder": "Enter your prompt here...", 7 | "editor_label": "Thought Editor", 8 | "editor_placeholder": "The AI's thinking process will appear here... 
You can edit when it pauses", 9 | "generate_btn": "Generate", 10 | "pause_btn": "Pause", 11 | "sync_threshold_label": "🧠 Human Thinking Cadence", 12 | "sync_threshold_info": "Pause for human turn per X paragraphs", 13 | "throughput_label": "⏱ Sync Rate", 14 | "throughput_info": "Tokens/s - 5:Learn, 10:Follow, 50:Skim", 15 | "language_label": "Language", 16 | "loading_thinking": "🤖 AI Thinking ↓ Shift+Enter to Pause", 17 | "loading_output": "🖨️ Result Writing → (Pause & submit to reroll the result)", 18 | "interrupted": "🤔 Pause, Human thinking time - **EDIT THOUGHTS BELOW**", 19 | "completed": "✅ Completed → Check overview", 20 | "error": "Error", 21 | "api_config_label": "API Configuration", 22 | "api_key_label": "API Key", 23 | "api_key_placeholder": "Leave empty to use environment variable", 24 | "api_url_label": "API URL", 25 | "api_url_placeholder": "Leave empty for default URL", 26 | "clear_btn": "Clear Thinking", 27 | "introduction": "Think together with AI. Use `Shift+Enter` to toggle generation<br>
You can modify the thinking process when AI pauses", 28 | "bot_label": "Conversation Overview", 29 | "bot_default": [ 30 | { 31 | "role": "assistant", 32 | "content": "Welcome to our co-thinking space! Ready to synchronize our cognitive rhythms? \n Shall we start by adjusting the throughput slider to match your reading pace? \n Enter your prompt, edit my thinking process when I pause, and let's begin weaving thoughts together →", 33 | }, 34 | ], 35 | "editor_default": "AI thought will start with this, leave blank to think freely", 36 | "waiting_api": "⏳ Waiting for API response", 37 | "api_retry": "🔁 API no response, hit Shift+Enter to try again.", 38 | "api_interrupted": "⚠️ Paused, API connection interrupted. Hit Shift+Enter to reconnect", 39 | "result_editing_toggle": "Editor includes Result" 40 | 41 | }, 42 | "zh": { 43 | "title": "## CoT-Lab: 人机协同思维实验室\n在一轮对话中跟随、学习、迭代思维链。克隆Space并使用自己的API KEY可以获得更好的体验。 \n GitHub: https://github.com/Intelligent-Internet/CoT-Lab-Demo", 44 | "prompt_label": "任务描述 - 提示词", 45 | "prompt_placeholder": "在此输入您的问题...", 46 | "editor_label": "思维编辑器", 47 | "editor_placeholder": "AI的思考过程将在此显示...您可以在暂停的时候编辑", 48 | "generate_btn": "生成", 49 | "pause_btn": "暂停", 50 | "sync_threshold_label": "🧠 人类思考间隔", 51 | "sync_threshold_info": "段落数 - 每X段落自动暂停进入人类思考时间", 52 | "throughput_label": "⏱ 同步思考速度", 53 | "throughput_info": "词元/秒 - 5:学习, 10:跟读, 50:跳读", 54 | "language_label": "界面语言", 55 | "loading_thinking": "🤖 AI思考中 ↓ Shift+Enter可暂停", 56 | "loading_output": "🖨️ 结果输出中 → (暂停再提交会重新生成结果部分)", 57 | "interrupted": "🤔 暂停,人类思考回合 **下面的思考过程可以编辑**", 58 | "completed": "✅ 已完成 → 查看完整对话", 59 | "error": "错误", 60 | "api_config_label": "API配置", 61 | "api_key_label": "API密钥", 62 | "api_key_placeholder": "留空使用环境变量", 63 | "api_url_label": "API地址", 64 | "api_url_placeholder": "留空使用默认地址", 65 | "clear_btn": "清空思考", 66 | "introduction": "和AI一起思考,Shift+Enter切换生成状态<br>
AI暂停的时候你可以编辑思维过程", 67 | "bot_label": "对话一览", 68 | "bot_default": [ 69 | { 70 | "role": "assistant", 71 | "content": "欢迎来到协同思考空间!准备好同步我们的认知节奏了吗?\n 建议先调整右侧的'同步思考速度'滑块,让它匹配你的阅读速度 \n 在左侧输入任务描述,在我暂停时修改我的思维,让我们开始编织思维链条 →", 72 | }, 73 | {"role": "assistant", "content": "**Shift+Enter** 可以暂停/继续AI生成"}, 74 | ], 75 | "editor_default": "AI思维会以此开头,留空即为默认思考", 76 | "waiting_api": "⏳ 等待API响应", 77 | "api_retry": "🔁 API无响应, Shift+Enter 重试一次试试?", 78 | "api_interrupted": "⚠️ 暂停,API连接意外中断,Shift+Enter 可重连", 79 | "result_editing_toggle": "编辑器包括最终答案" 80 | 81 | }, 82 | } 83 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | openai 2 | python-dotenv 3 | gradio -------------------------------------------------------------------------------- /styles.css: -------------------------------------------------------------------------------- 1 | footer{display:none !important} --------------------------------------------------------------------------------
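A note on the pacing mechanism, for readers of `app.py`: the ⏱ throughput slider works by sleeping between streamed chunks. `generate_ai_response` computes `interval = 1.0 / self.throughput` and polls in 5 ms steps so a pause request can interrupt the wait promptly. Stripped of the Gradio plumbing, the loop reduces to the illustrative sketch below (the `paced` generator name is ours, not the repo's):

```python
import time

def paced(chunks, throughput=10):
    """Yield chunks no faster than `throughput` items per second,
    mirroring the sleep loop in app.py's generate_ai_response."""
    interval = 1.0 / throughput
    for chunk in chunks:
        start = time.time()
        yield chunk
        # Poll in small steps so an external pause flag could
        # interrupt the wait quickly (app.py checks should_stream here).
        while time.time() - start < interval:
            time.sleep(0.005)
```

At the default of 10 tokens/s the wait is 0.1 s per chunk, which is what makes the chain of thought readable in real time.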