├── README.md
├── gpt-stream-0610-latest.py
├── gpts.py
├── gpt软件-stream0125.py
├── requirements.txt
└── templates
    └── g1.html

/README.md:
--------------------------------------------------------------------------------

# Vercel Deployment
2024-03-13: Vercel deployment is now supported; see [chatgpt-python-vercel](https://github.com/buwanyuanshen/chatgpt-python-vercel).

# GPT-3.5/4.0 Chatbot

The project is deliberately simple: it needs flask, openai (0.28.0), requests, and a few other libraries. Running it takes only two files, one .py script plus its matching HTML template, laid out as:

– xx.py

– folder: templates

–– xx.html

# Usage
Put the xx.py file and the templates folder in the same directory, place g1.html inside templates, and run xx.py.

This project is a chatbot written in Python, built on the latest GPT-3.5/4.0 models. It runs without a VPN and only needs an API Key to start chatting. Streaming output is implemented; code-block syntax highlighting is not.

# GPT-3.5 in action:
![image](https://github.com/buwanyuanshen/Chatgpt-python/assets/144007759/b9c3b64d-8483-45d5-9ccd-548a2a96112e)
![image](https://github.com/buwanyuanshen/Chatgpt-python/assets/144007759/81188db0-c9ef-4ca4-840b-df8d26de2256)

# GPT-4.0 in action:
![image](https://github.com/buwanyuanshen/Chatgpt-python/assets/144007759/24374c6f-2b57-4e89-a4db-c8cceaed26c8)
![image](https://github.com/buwanyuanshen/Chatgpt-python/assets/144007759/eb2beaa6-496a-44c3-9330-f6cb6c747f28)

# gpt-stream-0125 screenshot:
![_VXKRV_VT6S(H5}Y(E`9AMX](https://github.com/buwanyuanshen/chatgpt-python/assets/144007759/1a691719-20ce-4e22-b1dc-19d36ef6faee)

# Free ChatGPT sites as of 2024-10-08, still being updated!:
1. ChatGPT FREE: [ChatGPT FREE](https://gpt5.sbs)
2. Paint-Web: [Paint-Web](https://paint.gpt5.sbs)
3. ChatGPT-NextWeb: [ChatGPT-NextWeb](https://chatpro.icu)
4. ChatGPT-Lobe: [ChatGPT-Lobe](https://lobe.chatpro.icu)
5. Chat-MJ: [Chat-MJ](https://mj.chatpro.icu)
6. Chat-OpenWebUI: [Chat-OpenWebUI](https://open.chatpro.icu)
7. Chat-Plus-Free: [Chat-Plus-Free](https://free.chatpro.icu)
8. Chat-Plus-New-Charge: [Chat-Plus-New-Charge](https://gpt.chatpro.icu)
9. Chat-Plus-Old-Charge: [Chat-Plus-Old-Charge](https://gpts.chatpro.icu)
10. CF API: [CF API](https://api.gpt5.sbs)
11. CF SHOP: [CF SHOP](https://shop.chatpro.icu)
12. Join [QQ group 226848325](https://qm.qq.com/cgi-bin/qm/qr?_wv=1027&k=1OOigjF5hxHUSQ5GE5U2UOIwswuckYOe&authKey=2pdTkM0NqehD2OuMojvBMnsmCAUcD6oO3ttDzS5CNle8tnre1a9Jp30aJZVUnC2c&noverify=0&group_code=226848325) to discuss and learn!

# Features

- Written in Python; runs on any recent Python 3.
- Calls the GPT-3.5/4.0 models directly, no VPN required.
- Enter your personal API Key and start chatting; multiple API Keys are supported.
- Configurable parameters, including `max-tokens`, `temperature`, `role`, and more.
- All official models are available for selection.
- The main branch holds the source files; the latest are `gpts.py` and `gpt软件-stream最新版.py`.
- Before use, read the corresponding README files for usage notes and caveats.
- A separate software branch carries the original GPT-3.5 app and the latest build, gpt-stream-0610.exe (covering all GPT-3.5 and 4.0 models); both work with your personal API Key and need no VPN.

# Chat Script Setup

1. The project's web deployments for ChatGPT-3.5 and ChatGPT-4.0 have been shut down.
2. Enter your personal API Key in the page.
3. Select the GPT-3.5 model to use.
4. Set parameters as needed, such as `max-tokens`, `temperature`, and `role`.
5. Run the .py app and chat with the bot.
6. In the software branch (executables) you will find the original GPT-3.5 app and the latest build, gpt-stream-0610.exe (all GPT-3.5 and 4.0 models, ready to run); both only need your personal API Key, no VPN.

# Notes

- Make sure you have a valid API Key before using this project.
- Use the chatbot responsibly and follow the applicable laws and regulations.
- For a deeper look at the source and its usage, read the corresponding README files.
- The latest source files are `gpts.py` (with `templates/g1.html`) and `gpt软件-stream最新版.py`.
- The software branch also ships gpt-stream-0610.exe (all GPT-3.5 and 4.0 models), usable with your personal API Key, no VPN needed.

## Other Projects
1. [ChatGPT-website-forward-vercel (frontend)](https://github.com/buwanyuanshen/ChatGPT-website-forward-vercel): one-click deploy, with streaming.

2. [ChatGPT-website-vercel (backend)](https://github.com/buwanyuanshen/ChatGPT-website-vercel): one-click deploy, no streaming.

3. [ChatGPT-website-plus (backend)](https://github.com/buwanyuanshen/ChatGPT-website-plus): local deploy, with streaming.

# Contact

If you run into any problems or have suggestions, feel free to reach out (the only contacts are QQ: 1901304265 and QQ group: 226848325). Enjoy!
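Every client in this repo consumes the `/v1/chat/completions` endpoint as a stream of `data: …` lines. As a minimal sketch of how one streamed line is decoded (the chunk shape follows the OpenAI chat-completions streaming format; `extract_delta` is an illustrative helper, not a function from this repo):

```python
import json

def extract_delta(raw_line):
    """Parse one server-sent-event line from the chat-completions stream.

    Returns the text delta, or None for blank keep-alive lines and the
    final [DONE] marker."""
    text = raw_line.decode("utf-8").strip()
    if not text.startswith("data: "):
        return None  # keep-alive / empty line
    payload = text[len("data: "):]
    if payload == "[DONE]":
        return None  # end-of-stream marker, not JSON
    chunk = json.loads(payload)
    delta = chunk["choices"][0].get("delta", {})
    return delta.get("content")

# Fabricated example chunk, shaped like one line of the real stream:
line = b'data: {"choices": [{"delta": {"content": "Hi"}}]}'
print(extract_delta(line))  # Hi
```

Accumulating these deltas (rather than appending each one to the history separately) yields the assistant's complete reply.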
84 | -------------------------------------------------------------------------------- /gpt-stream-0610-latest.py: -------------------------------------------------------------------------------- 1 | import tkinter as tk 2 | from tkinter import ttk, scrolledtext, messagebox 3 | import requests 4 | import threading 5 | import json 6 | import pyperclip # 用于复制文本到剪贴板 7 | import datetime 8 | import pyttsx3 9 | 10 | messages = [] 11 | # 定义可供选择的模型 12 | available_models = { 13 | "gpt-3.5-turbo": "GPT-3.5-Turbo(4096tokens)", 14 | "gpt-3.5-turbo-0125": "GPT-3.5-Turbo-0125(4096tokens)", 15 | "gpt-3.5-turbo-1106": "GPT-3.5-Turbo-1106(4096tokens)", 16 | "gpt-3.5-turbo-0613": "GPT-3.5-Turbo-0613(4096tokens)", 17 | "gpt-3.5-turbo-0301": "GPT-3.5-Turbo-0301(4096tokens)", 18 | "gpt-3.5-turbo-16k": "GPT-3.5-Turbo-16k(16385tokens)", 19 | "gpt-3.5-turbo-16k-0613": "GPT-3.5-Turbo-16k-0613(16385tokens)", 20 | "gpt-4o": "GPT-4o(4096tokens,max:128000tokens)", 21 | "gpt-4o-2024-05-13": "GPT-4o-2024-05-13(4096tokens,max:128000tokens)", 22 | "gpt-4-turbo": "GPT-4-Turbo(4096tokens,max:128000tokens)", 23 | "gpt-4-turbo-2024-04-09": "GPT-4-Turbo-2024-04-09(4096tokens,max:128000tokens)", 24 | "gpt-4-turbo-preview": "GPT-4-Turbo-preview(4096tokens,max:128000tokens)", 25 | "gpt-4-0125-preview": "GPT-4-0125-preview(4096tokens,max:128000tokens)", 26 | "gpt-4-1106-preview": "GPT-4-1106-preview(4096tokens,max:128000tokens)", 27 | "gpt-4-vision-preview": "GPT-4-Vision-preview(4096tokens,max:128000tokens)", 28 | "gpt-4": "GPT-4(8192tokens)", 29 | "gpt-4-32k": "GPT-4-32k(32768tokens)", 30 | "gpt-4-0613": "GPT-4-0613(8192tokens)", 31 | "gpt-4-32k-0613": "GPT-4-32k-0613(32768tokens)", 32 | "gpt-4-0314": "GPT-4-0314(8192tokens)", 33 | "gpt-4-32k-0314": "GPT-4-32k-0314(32768tokens)", 34 | } 35 | # 默认参数值 36 | default_settings = { 37 | "selected_model": "gpt-3.5-turbo-0125", 38 | "custom_model": "", 39 | "system_message": "You are a helpful assistant.", 40 | "selected_api_key": "", 41 | "temperature": 0.5, 42 | 
"max_tokens": 3000, 43 | "continuous_chat": 0, 44 | "api_base": "https://api.openai-proxy.com", # 添加代理网址配置项 45 | "message_limit": 5, # 新增连续对话消息条数限制 46 | } 47 | 48 | # 读取配置文件 49 | def read_settings(): 50 | global selected_model,custom_model, system_message, selected_api_key, temperature, max_tokens, continuous_chat, api_base, message_limit 51 | try: 52 | with open("settings.txt", "r", encoding="utf-8") as f: 53 | data = json.load(f) 54 | selected_model = data.get("selected_model", default_settings["selected_model"]) 55 | custom_model = data.get("custom_model", default_settings["custom_model"]) 56 | system_message = data.get("system_message", default_settings["system_message"]) 57 | selected_api_key = data.get("selected_api_key", default_settings["selected_api_key"]) 58 | temperature = data.get("temperature", default_settings["temperature"]) 59 | max_tokens = data.get("max_tokens", default_settings["max_tokens"]) 60 | continuous_chat = data.get("continuous_chat", 0) # 读取连续对话状态,默认为关闭 61 | api_base = data.get("api_base", default_settings["api_base"]) # 读取代理网址配置项 62 | message_limit = data.get("message_limit", default_settings["message_limit"]) 63 | 64 | except FileNotFoundError: 65 | with open("settings.txt", "w", encoding="utf-8") as f: 66 | json.dump(default_settings, f, indent=4) 67 | selected_model = default_settings["selected_model"] 68 | custom_model = default_settings["custom_model"] 69 | system_message = default_settings["system_message"] 70 | selected_api_key = default_settings["selected_api_key"] 71 | temperature = default_settings["temperature"] 72 | max_tokens = default_settings["max_tokens"] 73 | continuous_chat = default_settings.get("continuous_chat", 0) 74 | api_base = default_settings["api_base"] 75 | message_limit = default_settings["message_limit"] 76 | 77 | except Exception as e: 78 | messagebox.showerror("Error reading settings:", str(e)) 79 | 80 | # 保存配置文件 81 | def save_settings(): 82 | global selected_model, custom_model, system_message, 
selected_api_key, temperature, max_tokens, continuous_chat, api_base, message_limit 83 | selected_model = model_select.get() 84 | custom_model = custom_model_entry.get().strip() 85 | system_message = system_message_text.get("1.0", "end-1c").strip() 86 | temperature = float(temperature_entry.get()) 87 | max_tokens = int(max_tokens_entry.get()) 88 | continuous_chat = continuous_chat_var.get() 89 | message_limit = int(message_limit_entry.get()) 90 | selected_api_key = api_key_entry.get().strip() 91 | api_base = api_base_entry.get().strip() 92 | 93 | data = { 94 | "selected_model": selected_model, 95 | "custom_model": custom_model, 96 | "system_message": system_message, 97 | "selected_api_key": selected_api_key, 98 | "temperature": temperature, 99 | "max_tokens": max_tokens, 100 | "continuous_chat": continuous_chat, # 保存连续对话状态 101 | "api_base": api_base, # 保存代理网址配置项 102 | "message_limit": message_limit # 保存连续对话消息条数限制 103 | } 104 | try: 105 | with open("settings.txt", "w", encoding="utf-8") as f: 106 | json.dump(data, f, indent=4, ensure_ascii=False) 107 | except Exception as e: 108 | messagebox.showerror("Error saving settings:", str(e)) 109 | 110 | 111 | # 初始化配置 112 | read_settings() 113 | # 定义一个变量用于保存当前界面状态 114 | simplified_state = False 115 | 116 | def toggle_simplified(): 117 | global simplified_state 118 | if simplified_state: 119 | # 显示所有界面元素 120 | model_label.grid(row=1, column=0, sticky="w") 121 | model_select.grid(row=1, column=1, padx=(0, 10), sticky="ew") 122 | custom_model_label.grid(row=1, column=2, sticky="w") 123 | custom_model_entry.grid(row=1, column=3, padx=(0, 10), sticky="ew") 124 | system_message_label.grid(row=2, column=0, sticky="w") 125 | system_message_text.grid(row=2, column=1, padx=(0, 10), sticky="ew", columnspan=3) 126 | api_base_label.grid(row=3, column=0, sticky="w") 127 | api_base_entry.grid(row=3, column=1, padx=(0, 10), sticky="ew", columnspan=3) 128 | show_api_base_check.grid(row=4, columnspan=4, pady=(5, 0), sticky="w") 129 | 
api_key_label.grid(row=5, column=0, sticky="w") 130 | api_key_entry.grid(row=5, column=1, padx=(0, 10), sticky="ew", columnspan=3) 131 | show_api_key_check.grid(row=6, columnspan=4, pady=(5, 0), sticky="w") 132 | temperature_label.grid(row=7, column=0, sticky="w") 133 | temperature_entry.grid(row=7, column=1, padx=(0, 10), sticky="ew", columnspan=3) 134 | max_tokens_label.grid(row=8, column=0, sticky="w") 135 | max_tokens_entry.grid(row=8, column=1, padx=(0, 10), sticky="ew", columnspan=3) 136 | continuous_chat_check.grid(row=9, columnspan=4, pady=(5, 0), sticky="w") 137 | message_limit_label.grid(row=10, column=0, sticky="w") 138 | message_limit_entry.grid(row=10, column=1, padx=(0, 10), sticky="ew", columnspan=3) 139 | user_input_label.grid(row=11, column=0, sticky="w") 140 | user_input_text.grid(row=11, column=1, padx=(0, 5), sticky="ew", columnspan=3) 141 | send_button.grid(row=12, column=0, padx=(20, 10), pady=(10, 0), sticky="w") 142 | clear_button.grid(row=12, column=1, padx=(10, 20), pady=(10, 0), sticky="n") 143 | simplified_button.grid(row=2, column=0, padx=(10, 15), pady=(10, 0), sticky="n") 144 | simplified_state = False 145 | else: 146 | # 隐藏部分界面元素 147 | model_label.grid_forget() 148 | model_select.grid_forget() 149 | custom_model_label.grid_forget() 150 | custom_model_entry.grid_forget() 151 | system_message_label.grid_forget() 152 | system_message_text.grid_forget() 153 | api_key_label.grid_forget() 154 | api_key_entry.grid_forget() 155 | show_api_key_check.grid_forget() 156 | temperature_label.grid_forget() 157 | temperature_entry.grid_forget() 158 | max_tokens_label.grid_forget() 159 | max_tokens_entry.grid_forget() 160 | continuous_chat_check.grid_forget() 161 | api_base_label.grid_forget() 162 | api_base_entry.grid_forget() 163 | simplified_state = True 164 | 165 | def get_response_thread(): 166 | global selected_model, custom_model, system_message, selected_api_key, temperature, max_tokens, continuous_chat, api_base, message_limit 167 | 
user_input = user_input_text.get("1.0", "end-1c").strip() 168 | if not user_input: 169 | return 170 | user_input_text.delete("1.0", tk.END) 171 | selected_model = model_select.get() 172 | custom_model = custom_model_entry.get().strip() 173 | if custom_model: 174 | selected_model = custom_model 175 | system_message = system_message_text.get("1.0", "end-1c").strip() 176 | temperature = float(temperature_entry.get()) 177 | max_tokens = int(max_tokens_entry.get()) 178 | continuous_chat = continuous_chat_var.get() 179 | message_limit = int(message_limit_entry.get()) 180 | response_text_box.config(state=tk.NORMAL) 181 | response_text_box.insert(tk.END, "\n\n" + "用户: " + user_input + "\n", "user") 182 | response_text_box.tag_configure("user", foreground="blue") 183 | response_text_box.insert(tk.END, f"{selected_model}: ") 184 | response_text_box.config(state=tk.DISABLED) 185 | response_text_box.see(tk.END) 186 | 187 | selected_api_key = api_key_entry.get().strip() 188 | api_base = api_base_entry.get().strip() 189 | if continuous_chat == 0 or len(messages) == 0: 190 | messages.clear() 191 | messages.append({"role": "system", "content": system_message}) 192 | messages.append({"role": "user", "content": user_input}) 193 | else: 194 | # continuous chat: the system message is already in the history 195 | messages.append({"role": "user", "content": user_input}) 196 | 197 | while len(messages) > message_limit: 198 | messages.pop(1)  # keep the system message, drop the oldest turn 199 | 200 | headers = { 201 | "Content-Type": "application/json", 202 | "Authorization": f"Bearer {selected_api_key}" 203 | } 204 | data = { 205 | "model": selected_model, 206 | "messages": messages, 207 | "temperature": temperature, 208 | "max_tokens": max_tokens, 209 | "stream": True # Enable streaming 210 | } 211 | url = f"{api_base}/v1/chat/completions" 212 | 213 | def update_response_text(response_text): 214 | response_text_box.config(state=tk.NORMAL) 215 | response_text_box.insert(tk.END, response_text) 216 | 
response_text_box.config(state=tk.DISABLED) 217 | response_text_box.see(tk.END) 218 | response_text_box.yview_moveto(1.0) 219 | 220 | try: 221 | response = requests.post(url, headers=headers, json=data, stream=True) 222 | response.raise_for_status() 223 | 224 | assistant_reply = ""  # accumulate the full streamed reply 225 | for chunk in response.iter_lines(): 226 | if not chunk: 227 | continue 228 | streamStr = chunk.decode("utf-8").replace("data: ", "", 1).strip() 229 | if streamStr == "[DONE]":  # end-of-stream marker 230 | break 231 | try: 232 | streamDict = json.loads(streamStr) 233 | except json.JSONDecodeError: 234 | continue  # skip keep-alives and other non-JSON lines 235 | if "choices" in streamDict and streamDict["choices"]: 236 | delData = streamDict["choices"][0] 237 | delta = delData.get("delta", {}) 238 | respStr = delta.get("content") 239 | if respStr: 240 | assistant_reply += respStr 241 | update_response_text(respStr) 242 | if assistant_reply: 243 | # record the complete reply once, not one message per chunk 244 | messages.append({"role": "assistant", "content": assistant_reply}) 245 | 246 | except requests.exceptions.RequestException as e: 247 | error_message = f"Request error: {str(e)}" 248 | messages.append({"role": "assistant", "content": error_message}) 249 | update_response_text("\n" + error_message) 250 | except Exception as e: 251 | error_message = f"Error: {str(e)}" 252 | messages.append({"role": "assistant", "content": error_message}) 253 | update_response_text("\n" + error_message) 254 | def get_response(): 255 | # run get_response_thread on a new thread so the UI stays responsive 256 | response_thread = threading.Thread(target=get_response_thread) 257 | response_thread.start() 258 | 259 | def clear_history(): 260 | global messages 261 | messages.clear() 262 | response_text_box.config(state=tk.NORMAL) 263 | response_text_box.delete("1.0", tk.END) 264 | response_text_box.config(state=tk.DISABLED) 265 | user_input_text.delete("1.0", tk.END) 266 | 267 | def set_api_key_show_state(): 268 | global selected_api_key 269 | show = show_api_key_var.get() 270 | if show: 271 | api_key_entry.config(show="") 
272 | else: 273 | api_key_entry.config(show="*") 274 | selected_api_key = api_key_entry.get().strip() 275 | 276 | def set_api_base_show_state(): 277 | global api_base 278 | show = show_api_base_var.get() 279 | if show: 280 | api_base_entry.config(show="") 281 | else: 282 | api_base_entry.config(show="*") 283 | api_base = api_base_entry.get().strip() 284 | 285 | def copy_text_to_clipboard(text): 286 | pyperclip.copy(text) 287 | 288 | 289 | # 创建tkinter窗口 290 | root = tk.Tk() 291 | root.title("FREE ChatGPT-----Stream-----MADE-----BY-----中北锋哥-----2024.6.10") 292 | 293 | # 主题和风格 294 | style = ttk.Style() 295 | style.theme_use("alt") 296 | style.configure("TFrame", background="lightblue") 297 | style.configure("TButton", padding=2, relief="flat", foreground="black", background="lightblue") 298 | style.configure("TLabel", padding=1, foreground="black", background="lightblue", font=("Arial", 11)) 299 | style.configure("TEntry", padding=1, foreground="black", insertcolor="lightblue") 300 | style.configure("TCheckbutton", padding=5, font=("Helvetica", 11), background="lightblue", foreground="blue") 301 | 302 | # 创建界面元素 303 | frame_left = ttk.Frame(root) 304 | frame_left.grid(row=0, column=0, padx=5, pady=5, sticky="nsew") 305 | frame_right = ttk.Frame(root) 306 | frame_right.grid(row=0, column=1, padx=5, pady=5, sticky="nsew") 307 | 308 | # 设置权重,使得窗口大小变化时元素可以调整 309 | root.grid_columnconfigure(0, weight=1) 310 | root.grid_rowconfigure(0, weight=1) 311 | 312 | # 设置frame_left的权重,使得元素可以水平拉伸 313 | frame_left.grid_columnconfigure(1, weight=1) 314 | frame_left.grid_columnconfigure(3, weight=1) # 为自定义模型输入框设置权重 315 | 316 | # 设置frame_right的权重,使得元素可以水平和垂直拉伸 317 | frame_right.grid_columnconfigure(0, weight=1) 318 | frame_right.grid_rowconfigure(1, weight=1) 319 | 320 | model_label = ttk.Label(frame_left, text="选择GPT模型:") 321 | model_label.grid(row=1, column=0, sticky="w") 322 | 323 | model_select = ttk.Combobox(frame_left, values=list(available_models.keys()), state="readonly", 
font=("黑体", 11)) 324 | model_select.set(selected_model) 325 | model_select.grid(row=1, column=1, padx=(0, 5), sticky="ew") 326 | 327 | custom_model_label = ttk.Label(frame_left, text="自定义模型:") 328 | custom_model_label.grid(row=1, column=2, sticky="w") 329 | 330 | custom_model_entry = ttk.Entry(frame_left, width=20, font=("黑体", 11)) 331 | custom_model_entry.insert(0,custom_model) 332 | custom_model_entry.grid(row=1, column=3, padx=(0, 5), sticky="ew") 333 | 334 | system_message_label = ttk.Label(frame_left, text="系统角色:") 335 | system_message_label.grid(row=2, column=0, sticky="w") 336 | 337 | system_message_text = scrolledtext.ScrolledText(frame_left, width=45, height=3, font=("黑体", 12)) 338 | system_message_text.insert(tk.END, system_message) 339 | system_message_text.grid(row=2, column=1, padx=(0, 5), sticky="ew", columnspan=3) 340 | 341 | api_base_label = ttk.Label(frame_left, text="API 代理网址:") # 新增代理网址配置项 342 | api_base_label.grid(row=3, column=0, sticky="w") 343 | 344 | api_base_entry = ttk.Entry(frame_left, width=50, show="*", font=("Arial", 11)) 345 | api_base_entry.insert(0, api_base) 346 | api_base_entry.grid(row=3, column=1, padx=(0, 5), sticky="ew", columnspan=3) 347 | 348 | show_api_base_var = tk.IntVar() 349 | show_api_base_check = ttk.Checkbutton(frame_left, text="显示 API Base", variable=show_api_base_var, command=set_api_base_show_state) 350 | show_api_base_check.grid(row=4, columnspan=4, pady=(5, 0), sticky="w") 351 | 352 | api_key_label = ttk.Label(frame_left, text="API 密钥(sk-/sess-):") 353 | api_key_label.grid(row=5, column=0, sticky="w") 354 | 355 | api_key_entry = ttk.Entry(frame_left, width=50, show="*", font=("Arial", 11)) 356 | api_key_entry.insert(0, selected_api_key) 357 | api_key_entry.grid(row=5, column=1, padx=(0, 5), sticky="ew", columnspan=3) 358 | 359 | show_api_key_var = tk.IntVar() 360 | show_api_key_check = ttk.Checkbutton(frame_left, text="显示 API Key", variable=show_api_key_var, command=set_api_key_show_state) 361 | 
show_api_key_check.grid(row=6, columnspan=4, pady=(5, 0), sticky="w") 362 | 363 | temperature_label = ttk.Label(frame_left, text="Temperature:") 364 | temperature_label.grid(row=7, column=0, sticky="w") 365 | 366 | temperature_entry = ttk.Entry(frame_left, width=45, font=("Arial", 11)) 367 | temperature_entry.insert(0, temperature) 368 | temperature_entry.grid(row=7, column=1, padx=(0, 5), sticky="ew", columnspan=3) 369 | 370 | max_tokens_label = ttk.Label(frame_left, text="Max-Tokens:") 371 | max_tokens_label.grid(row=8, column=0, sticky="w") 372 | 373 | max_tokens_entry = ttk.Entry(frame_left, width=45, font=("Arial", 11)) 374 | max_tokens_entry.insert(0, max_tokens) 375 | max_tokens_entry.grid(row=8, column=1, padx=(0, 5), sticky="ew", columnspan=3) 376 | 377 | continuous_chat_var = tk.IntVar() 378 | continuous_chat_var.set(continuous_chat) 379 | continuous_chat_check = ttk.Checkbutton(frame_left, text="开启连续对话", variable=continuous_chat_var) 380 | continuous_chat_check.grid(row=9, columnspan=4, pady=(5, 0), sticky="w") 381 | 382 | message_limit_label = ttk.Label(frame_left, text="连续对话消息条数:") 383 | message_limit_label.grid(row=10, column=0, sticky="w") 384 | 385 | message_limit_entry = ttk.Entry(frame_left, width=45, font=("Arial", 11)) 386 | message_limit_entry.insert(0, message_limit) 387 | message_limit_entry.grid(row=10, column=1, padx=(0, 5), sticky="ew", columnspan=3) 388 | 389 | user_input_label = ttk.Label(frame_left, text="用户消息输入框:") 390 | user_input_label.grid(row=11, column=0, sticky="w") 391 | 392 | # 用户输入框设置为自适应宽度 393 | user_input_text = scrolledtext.ScrolledText(frame_left, height=11, width=40, font=("宋体", 11)) 394 | user_input_text.grid(row=11, column=1, padx=(0, 5), sticky="ew", columnspan=3) 395 | user_input_text.grid_columnconfigure(1, weight=1) # 设置输入框列的权重 396 | 397 | response_label = ttk.Label(frame_right, text="对话消息记录框:") 398 | response_label.grid(row=0, column=0, columnspan=2, sticky="w") 399 | 400 | response_text_box = 
scrolledtext.ScrolledText(frame_right, height=25, width=50, state=tk.DISABLED, font=("宋体", 11)) 401 | response_text_box.grid(row=1, column=0, columnspan=2, sticky="nsew") 402 | response_text_box.grid_columnconfigure(1, weight=1)  # give the text column stretch weight 403 | 404 | # send button stretches to fit 405 | send_button = ttk.Button(frame_left, text="发送(Ctrl+Enter)", command=get_response, width=15) 406 | send_button.grid(row=12, column=0, padx=(20, 10), pady=(10, 0), sticky="w") 407 | 408 | clear_button = ttk.Button(frame_left, text="清空对话历史", command=clear_history, width=15) 409 | clear_button.grid(row=12, column=1, padx=(10, 20), pady=(10, 0), sticky="n") 410 | 411 | simplified_button = ttk.Button(frame_right, text="显示/隐藏参数", command=toggle_simplified, width=15) 412 | simplified_button.grid(row=2, column=0, padx=(10, 15), pady=(10, 0), sticky="n") 413 | 414 | def on_enter_key(event): 415 | get_response() 416 | 417 | # bind Ctrl+Enter in user_input_text to trigger send 418 | user_input_text.bind("<Control-Return>", on_enter_key) 419 | 420 | def copy_user_message(): 421 | response_text = response_text_box.get("end-2l linestart", "end-1c") 422 | if response_text.strip() != "": 423 | user_input = response_text.strip().split("\n")[0].replace("用户: ", "") 424 | copy_text_to_clipboard(user_input) 425 | 426 | def copy_assistant_message(): 427 | response_text = response_text_box.get("end-2l linestart", "end-1c") 428 | if response_text.strip() != "": 429 | assistant_message = response_text.strip().split("\n")[1].replace(f"{selected_model}: ", "") 430 | copy_text_to_clipboard(assistant_message) 431 | 432 | def save_chat_history(): 433 | chat_history = response_text_box.get("1.0", "end-1c") 434 | if chat_history.strip() != "": 435 | current_time = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S") 436 | filename = f"chat_history_{current_time}.txt" 437 | with open(filename, "w", encoding="utf-8") as file: 438 | file.write(chat_history) 439 | 440 | user_copy_button = ttk.Button(frame_left, text="复制用户消息", command=copy_user_message, width=15) 441 | 
user_copy_button.grid(row=13, column=0, padx=(20, 10), pady=(5, 0), sticky="w") 442 | 443 | assistant_copy_button = ttk.Button(frame_left, text="复制GPT消息", command=copy_assistant_message, width=15) 444 | assistant_copy_button.grid(row=13, column=1, padx=(10, 20), pady=(5, 0), sticky="n") 445 | 446 | save_history_button = ttk.Button(frame_right, text="保存对话记录", command=save_chat_history, width=15) 447 | save_history_button.grid(row=2, column=1, padx=(10, 15), pady=(10, 0), sticky="n") 448 | 449 | def play_user_message(): 450 | def play_audio(): 451 | engine = pyttsx3.init() 452 | user_message = response_text_box.get("end-2l linestart", "end-1c").strip().split("\n")[0].replace("用户: ", "") 453 | engine.setProperty('rate', 200) 454 | engine.setProperty('volume', 1) 455 | engine.setProperty('pitch', 2) 456 | engine.say(user_message) 457 | engine.runAndWait() 458 | engine.stop() 459 | 460 | audio_thread = threading.Thread(target=play_audio) 461 | audio_thread.start() 462 | 463 | play_button1 = ttk.Button(frame_left, text="播放用户消息", command=play_user_message, width=15) 464 | play_button1.grid(row=14, column=0, padx=(20, 10), pady=(5, 0), sticky="w") 465 | 466 | def play_assistant_message(): 467 | def play_audio(): 468 | engine = pyttsx3.init() 469 | assistant_message = response_text_box.get("end-2l linestart", "end-1c").strip().split("\n")[1].replace(f"{selected_model}: ", "") 470 | engine.setProperty('rate', 222) 471 | engine.setProperty('volume', 1) 472 | engine.setProperty('pitch', 5) 473 | engine.say(assistant_message) 474 | engine.runAndWait() 475 | engine.stop() 476 | 477 | audio_thread = threading.Thread(target=play_audio) 478 | audio_thread.start() 479 | 480 | play_button2 = ttk.Button(frame_left, text="播放GPT回复", command=play_assistant_message, width=15) 481 | play_button2.grid(row=14, column=1, padx=(10, 20), pady=(5, 0), sticky="n") 482 | 483 | def on_closing(): 484 | save_settings() 485 | save_chat_history() 486 | root.destroy() 487 | 488 | 
root.protocol("WM_DELETE_WINDOW", on_closing) 489 | # 运行窗口循环 490 | root.mainloop() 491 | -------------------------------------------------------------------------------- /gpts.py: -------------------------------------------------------------------------------- 1 | from flask import Flask, render_template, request, Response, jsonify 2 | import openai 3 | import random 4 | import threading 5 | 6 | # 设置代理网址 7 | openai.api_base = "https://api.openai-proxy.com/v1" 8 | 9 | # 替换为您自己的OpenAI API密钥列表 10 | api_keys = [ 11 | "", 12 | ] 13 | 14 | # 创建Flask应用程序 15 | app = Flask(__name__) 16 | # 定义可供选择的模型 17 | available_models = { 18 | "gpt-3.5-turbo-0125": "GPT-3.5-Turbo-0125(4096tokens)", 19 | "gpt-3.5-turbo-1106": "GPT-3.5-Turbo-1106(4096tokens)", 20 | "gpt-3.5-turbo": "GPT-3.5-Turbo(4096tokens)", 21 | "gpt-3.5-turbo-16k": "GPT-3.5-Turbo-16k(16385tokens)", 22 | "gpt-3.5-turbo-0613": "GPT-3.5-Turbo-0613(4096tokens)", 23 | "gpt-3.5-turbo-16k-0613": "GPT-3.5-Turbo-16k-0613(16385tokens)", 24 | "gpt-3.5-turbo-0301": "GPT-3.5-Turbo-0301(4096tokens)", 25 | "gpt-4-turbo-preview": "GPT-4-Turbo-preview(4096tokens,max:128000tokens)", 26 | "gpt-4-0125-preview": "GPT-4-0125-preview(4096tokens,max:128000tokens)", 27 | "gpt-4-1106-preview": "GPT-4-1106-preview(4096tokens,max:128000tokens)", 28 | "gpt-4-vision-preview": "GPT-4-Vision-preview(4096tokens,max:128000tokens)", 29 | "gpt-4": "GPT-4(8192tokens)", 30 | "gpt-4-32k": "GPT-4-32k(32768tokens)", 31 | "gpt-4-0613": "GPT-4-0613(8192tokens)", 32 | "gpt-4-32k-0613": "GPT-4-32k-0613(32768tokens)", 33 | "gpt-4-0314": "GPT-4-0314(8192tokens)", 34 | "gpt-4-32k-0314": "GPT-4-32k-0314(32768tokens)", 35 | } 36 | 37 | 38 | messages = [] 39 | messages_lock = threading.Lock() 40 | 41 | 42 | @app.route('/') 43 | def index(): 44 | 45 | return render_template('g1.html', models=available_models) 46 | 47 | @app.route('/clear_history', methods=['POST']) 48 | def clear_history(): 49 | with messages_lock: 50 | messages.clear() # 清除所有对话历史 51 | return 
jsonify({"message": "对话历史已清除"}) 52 | 53 | @app.route('/get_response', methods=['POST']) 54 | def get_response(): 55 | user_input = request.json.get('user_input') 56 | selected_model = request.json.get('selected_model') 57 | system_message = request.json.get('system_message') 58 | temperature = float(request.json.get('temperature')) 59 | max_tokens = int(request.json.get('max_tokens')) 60 | continuous_chat = request.json.get('continuous_chat') 61 | 62 | with messages_lock: 63 | if continuous_chat == "false" or not messages: 64 | messages.clear()  # start a fresh conversation when continuous chat is off 65 | messages.append({"role": "system", "content": system_message}) 66 | messages.append({"role": "user", "content": user_input}) 67 | else: 68 | # continuous chat: the system message is already in the history 69 | messages.append({"role": "user", "content": user_input}) 70 | selected_api_key = random.choice(api_keys) 71 | openai.api_key = selected_api_key.strip() 72 | while len(messages) > 5:  # cap history, mirroring the desktop client's default message limit 73 | messages.pop(1)  # keep the system message, drop the oldest turn 74 | 75 | response = openai.ChatCompletion.create( 76 | model=selected_model, 77 | messages=messages, 78 | temperature=temperature, 79 | max_tokens=max_tokens, 80 | stream=True 81 | ) 82 | collected_messages = [] 83 | # stream the delta chunks back to the client as they arrive 84 | def get_stream(tar_get_response): 85 | try: 86 | for chunk in tar_get_response: 87 | delta = chunk['choices'][0]['delta'] 88 | content = delta.get("content") 89 | if content: 90 | collected_messages.append(content) 91 | yield content 92 | except GeneratorExit: 93 | return  # client disconnected 94 | finally: 95 | # record the complete reply once, instead of one message per line 96 | with messages_lock: 97 | messages.append({"role": "assistant", "content": "".join(collected_messages)}) 98 | 99 | 100 | 101 | 102 | return Response(get_stream(response), content_type='text/html') 103 | if __name__ == '__main__': 104 | app.run('0.0.0.0', 80) 105 | 106 | -------------------------------------------------------------------------------- /gpt软件-stream0125.py: 
-------------------------------------------------------------------------------- 1 | import tkinter as tk 2 | from tkinter import ttk, scrolledtext, messagebox 3 | import openai 4 | import threading 5 | import json 6 | import pyperclip # 用于复制文本到剪贴板 7 | import datetime 8 | import pyttsx3 9 | 10 | # 设置代理网址 11 | openai.api_base = "https://api.openai-proxy.com/v1" 12 | messages = [] 13 | # 定义可供选择的模型 14 | available_models = { 15 | "gpt-3.5-turbo-0125": "GPT-3.5-Turbo-0125(4096tokens)", 16 | "gpt-3.5-turbo-1106": "GPT-3.5-Turbo-1106(4096tokens)", 17 | "gpt-3.5-turbo": "GPT-3.5-Turbo(4096tokens)", 18 | "gpt-3.5-turbo-16k": "GPT-3.5-Turbo-16k(16385tokens)", 19 | "gpt-3.5-turbo-0613": "GPT-3.5-Turbo-0613(4096tokens)", 20 | "gpt-3.5-turbo-16k-0613": "GPT-3.5-Turbo-16k-0613(16385tokens)", 21 | "gpt-3.5-turbo-0301": "GPT-3.5-Turbo-0301(4096tokens)", 22 | "gpt-4-turbo-preview": "GPT-4-Turbo-preview(4096tokens,max:128000tokens)", 23 | "gpt-4-0125-preview": "GPT-4-0125-preview(4096tokens,max:128000tokens)", 24 | "gpt-4-1106-preview": "GPT-4-1106-preview(4096tokens,max:128000tokens)", 25 | "gpt-4-vision-preview": "GPT-4-Vision-preview(4096tokens,max:128000tokens)", 26 | "gpt-4": "GPT-4(8192tokens)", 27 | "gpt-4-32k": "GPT-4-32k(32768tokens)", 28 | "gpt-4-0613": "GPT-4-0613(8192tokens)", 29 | "gpt-4-32k-0613": "GPT-4-32k-0613(32768tokens)", 30 | "gpt-4-0314": "GPT-4-0314(8192tokens)", 31 | "gpt-4-32k-0314": "GPT-4-32k-0314(32768tokens)", 32 | } 33 | # 默认参数值 34 | default_settings = { 35 | "selected_model": "gpt-3.5-turbo-0125", 36 | "system_message": "You are a helpful assistant.", 37 | "selected_api_key": "", 38 | "temperature": 0.5, 39 | "max_tokens": 3000, 40 | "continuous_chat": 0, 41 | "api_base": "https://api.openai-proxy.com/v1", # 添加代理网址配置项 42 | } 43 | 44 | # 读取配置文件 45 | def read_settings(): 46 | global selected_model, system_message, selected_api_key, temperature, max_tokens, continuous_chat, api_base 47 | try: 48 | with open("settings.txt", "r", encoding="utf-8") as f: 49 
| data = json.load(f) 50 | selected_model = data.get("selected_model", default_settings["selected_model"]) 51 | system_message = data.get("system_message", default_settings["system_message"]) 52 | selected_api_key = data.get("selected_api_key", default_settings["selected_api_key"]) 53 | temperature = data.get("temperature", default_settings["temperature"]) 54 | max_tokens = data.get("max_tokens", default_settings["max_tokens"]) 55 | continuous_chat = data.get("continuous_chat", 0) # 读取连续对话状态,默认为关闭 56 | api_base = data.get("api_base", default_settings["api_base"]) # 读取代理网址配置项 57 | 58 | except FileNotFoundError: 59 | with open("settings.txt", "w", encoding="utf-8") as f: 60 | json.dump(default_settings, f, indent=4) 61 | selected_model = default_settings["selected_model"] 62 | system_message = default_settings["system_message"] 63 | selected_api_key = default_settings["selected_api_key"] 64 | temperature = default_settings["temperature"] 65 | max_tokens = default_settings["max_tokens"] 66 | continuous_chat = default_settings.get("continuous_chat", 0) 67 | api_base = default_settings["api_base"] 68 | 69 | except Exception as e: 70 | messagebox.showerror("Error reading settings:", str(e)) 71 | 72 | # 保存配置文件 73 | def save_settings(): 74 | global selected_model, system_message, selected_api_key, temperature, max_tokens, continuous_chat, api_base 75 | data = { 76 | "selected_model": selected_model, 77 | "system_message": system_message, 78 | "selected_api_key": selected_api_key, 79 | "temperature": temperature, 80 | "max_tokens": max_tokens, 81 | "continuous_chat": continuous_chat, # 保存连续对话状态 82 | "api_base": api_base, # 保存代理网址配置项 83 | } 84 | try: 85 | with open("settings.txt", "w", encoding="utf-8") as f: 86 | json.dump(data, f, indent=4, ensure_ascii=False) 87 | except Exception as e: 88 | messagebox.showerror("Error saving settings:", str(e)) 89 | 90 | # 初始化配置 91 | read_settings() 92 | # 定义一个变量用于保存当前界面状态 93 | simplified_state = False 94 | 95 | def toggle_simplified(): 
    global simplified_state
    if simplified_state:
        # Show all of the parameter widgets again
        model_label.grid(row=1, column=0, sticky="w")
        model_select.grid(row=1, column=1, padx=(0, 10), sticky="ew")
        system_message_label.grid(row=2, column=0, sticky="w")
        system_message_text.grid(row=2, column=1, padx=(0, 10), sticky="ew")
        api_base_label.grid(row=3, column=0, sticky="w")
        api_base_entry.grid(row=3, column=1, padx=(0, 10), sticky="ew")
        show_api_base_check.grid(row=4, columnspan=2, pady=(5, 0), sticky="w")
        api_key_label.grid(row=5, column=0, sticky="w")
        api_key_entry.grid(row=5, column=1, padx=(0, 10), sticky="ew")
        show_api_key_check.grid(row=6, columnspan=2, pady=(5, 0), sticky="w")
        temperature_label.grid(row=7, column=0, sticky="w")
        temperature_entry.grid(row=7, column=1, padx=(0, 10), sticky="ew")
        max_tokens_label.grid(row=8, column=0, sticky="w")
        max_tokens_entry.grid(row=8, column=1, padx=(0, 10), sticky="ew")
        continuous_chat_check.grid(row=9, columnspan=2, pady=(5, 0), sticky="w")
        user_input_label.grid(row=10, column=0, sticky="w")
        user_input_text.grid(row=10, column=1, padx=(0, 5), sticky="ew")
        send_button.grid(row=11, column=0, padx=(20, 10), pady=(10, 0), sticky="w")
        clear_button.grid(row=11, column=1, padx=(10, 20), pady=(10, 0), sticky="n")
        simplified_button.grid(row=2, column=0, padx=(10, 15), pady=(10, 0), sticky="n")
        simplified_state = False
    else:
        # Hide the parameter widgets
        model_label.grid_forget()
        model_select.grid_forget()
        system_message_label.grid_forget()
        system_message_text.grid_forget()
        api_key_label.grid_forget()
        api_key_entry.grid_forget()
        show_api_key_check.grid_forget()
        temperature_label.grid_forget()
        temperature_entry.grid_forget()
        max_tokens_label.grid_forget()
        max_tokens_entry.grid_forget()
        continuous_chat_check.grid_forget()
        api_base_label.grid_forget()
        api_base_entry.grid_forget()
        show_api_base_check.grid_forget()  # hide the "显示 API Base" toggle as well
        simplified_state = True

def get_response_thread():
    global selected_model, system_message, selected_api_key, temperature, max_tokens, continuous_chat, api_base
    user_input = user_input_text.get("1.0", "end-1c").rstrip("\n")
    # Ignore empty input
    if not user_input.strip():
        return
    user_input_text.delete("1.0", tk.END)
    selected_model = model_select.get()
    system_message = system_message_text.get("1.0", "end-1c")
    temperature = float(temperature_entry.get())
    max_tokens = int(max_tokens_entry.get())
    continuous_chat = continuous_chat_var.get()
    response_text_box.config(state=tk.NORMAL)
    response_text_box.insert(tk.END, "\n\n" + "用户: " + user_input + "\n", "user")
    response_text_box.tag_configure("user", foreground="blue")
    response_text_box.insert(tk.END, f"{available_models[selected_model]}: ")
    response_text_box.config(state=tk.DISABLED)
    response_text_box.see(tk.END)

    if continuous_chat == 0:
        # Single-turn mode: discard any previous history first
        messages.clear()
    messages.append({"role": "system", "content": system_message})
    messages.append({"role": "user", "content": user_input})

    selected_api_key = api_key_entry.get().strip()
    api_base = api_base_entry.get().strip()  # proxy base URL from the entry field
    openai.api_key = selected_api_key
    openai.api_base = api_base

    # Cap the rolling history (note: this can also drop the system message)
    if len(messages) > 5:
        messages.pop(0)

    response = openai.ChatCompletion.create(
        model=selected_model,
        messages=messages,
        temperature=temperature,
        max_tokens=max_tokens,
        stream=True
    )

    def update_response_text(response_text):
        response_text_box.config(state=tk.NORMAL)
        response_text_box.insert(tk.END, response_text)
        response_text_box.config(state=tk.DISABLED)
        response_text_box.see(tk.END)
        response_text_box.yview_moveto(1.0)

    # Stream the reply chunk by chunk; accumulate the full text so the
    # assistant turn is appended to the history once, with newlines intact
    assistant_reply = ""
    for chunk in response:
        if 'choices' in chunk and len(chunk['choices']) > 0:
            delta = chunk['choices'][0]['delta']
            content = delta.get("content", "")
            if content:
                assistant_reply += content
                update_response_text(content)
    if assistant_reply:
        messages.append({"role": "assistant", "content": assistant_reply})

def get_response():
    # Run get_response_thread on a worker thread so the UI stays responsive
    response_thread = threading.Thread(target=get_response_thread)
    response_thread.start()

def clear_history():
    global messages
    messages.clear()
    response_text_box.config(state=tk.NORMAL)
    response_text_box.delete("1.0", tk.END)
    response_text_box.config(state=tk.DISABLED)
    user_input_text.delete("1.0", tk.END)

def set_api_key_show_state():
    global selected_api_key
    show = show_api_key_var.get()
    if show:
        api_key_entry.config(show="")
    else:
        api_key_entry.config(show="*")
    selected_api_key = api_key_entry.get().strip()

def set_api_base_show_state():
    global api_base
    show = show_api_base_var.get()
    if show:
        api_base_entry.config(show="")
    else:
        api_base_entry.config(show="*")
    api_base = api_base_entry.get().strip()

def copy_text_to_clipboard(text):
    pyperclip.copy(text)


# Create the tkinter window
root = tk.Tk()
root.title("FREE ChatGPT-----Stream-----MADE-----BY-----中北锋哥-----2024.2.25")

# Theme and styling
style = ttk.Style()
style.theme_use("alt")
style.configure("TFrame", background="lightblue")
style.configure("TButton", padding=2, relief="flat", foreground="black", background="lightblue")
style.configure("TLabel", padding=1, foreground="black", background="lightblue", font=("Arial", 11))
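For reference, the streaming loop above reassembles `delta` fragments emitted by the openai-python 0.28 SDK. A minimal standalone sketch of that accumulation pattern, run against hypothetical hard-coded chunk dicts instead of a live `ChatCompletion.create(stream=True)` call:

```python
# Sketch of the delta-accumulation pattern used by the streaming loop.
# `chunks` is a hypothetical stand-in for the generator returned by
# openai.ChatCompletion.create(..., stream=True).
chunks = [
    {"choices": [{"delta": {"role": "assistant"}}]},  # first chunk carries no content
    {"choices": [{"delta": {"content": "你好"}}]},
    {"choices": [{"delta": {"content": ",世界"}}]},
    {"choices": [{"delta": {}}]},                     # final chunk: empty delta
]

def accumulate(stream):
    """Join the content fragments of a streamed chat completion."""
    reply = ""
    for chunk in stream:
        choices = chunk.get("choices", [])
        if choices:
            # missing "content" keys (role-only or final chunks) contribute ""
            reply += choices[0]["delta"].get("content", "")
    return reply

print(accumulate(chunks))  # → 你好,世界
```

In the real app each fragment is also pushed to the text box as it arrives, which is what produces the typewriter-style streaming output.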
style.configure("TEntry", padding=1, foreground="black", insertcolor="lightblue")
style.configure("TCheckbutton", padding=5, font=("Helvetica", 11), background="lightblue", foreground="blue")

# Build the widgets
frame_left = ttk.Frame(root)
frame_left.grid(row=0, column=0, padx=5, pady=5, sticky="nsew")
frame_right = ttk.Frame(root)
frame_right.grid(row=0, column=1, padx=5, pady=5, sticky="nsew")

# Grid weights so the widgets resize with the window
root.grid_columnconfigure(0, weight=1)
root.grid_rowconfigure(0, weight=1)

# Let frame_left's second column stretch horizontally
frame_left.grid_columnconfigure(1, weight=1)

# Let frame_right stretch both horizontally and vertically
frame_right.grid_columnconfigure(0, weight=1)
frame_right.grid_rowconfigure(1, weight=1)

model_label = ttk.Label(frame_left, text="选择GPT模型:")
model_label.grid(row=1, column=0, sticky="w")

model_select = ttk.Combobox(frame_left, values=list(available_models.keys()), state="readonly", font=("黑体", 11))
model_select.set(selected_model)
model_select.grid(row=1, column=1, padx=(0, 5), sticky="ew")

system_message_label = ttk.Label(frame_left, text="系统角色:")
system_message_label.grid(row=2, column=0, sticky="w")

system_message_text = scrolledtext.ScrolledText(frame_left, width=45, height=3, font=("黑体", 12))
system_message_text.insert(tk.END, system_message)
system_message_text.grid(row=2, column=1, padx=(0, 5), sticky="ew")

api_base_label = ttk.Label(frame_left, text="API 代理网址(填到v1):")  # API proxy base URL field
api_base_label.grid(row=3, column=0, sticky="w")

api_base_entry = ttk.Entry(frame_left, width=50, show="*", font=("Arial", 11))
api_base_entry.insert(0, api_base)
api_base_entry.grid(row=3, column=1, padx=(0, 5), sticky="ew")

show_api_base_var = tk.IntVar()
show_api_base_check = ttk.Checkbutton(frame_left, text="显示 API Base",
                                      variable=show_api_base_var, command=set_api_base_show_state)
show_api_base_check.grid(row=4, columnspan=2, pady=(5, 0), sticky="w")

api_key_label = ttk.Label(frame_left, text="API 密钥(sk-/sess-):")
api_key_label.grid(row=5, column=0, sticky="w")

api_key_entry = ttk.Entry(frame_left, width=50, show="*", font=("Arial", 11))
api_key_entry.insert(0, selected_api_key)
api_key_entry.grid(row=5, column=1, padx=(0, 5), sticky="ew")

show_api_key_var = tk.IntVar()
show_api_key_check = ttk.Checkbutton(frame_left, text="显示 API Key", variable=show_api_key_var, command=set_api_key_show_state)
show_api_key_check.grid(row=6, columnspan=2, pady=(5, 0), sticky="w")

temperature_label = ttk.Label(frame_left, text="Temperature:")
temperature_label.grid(row=7, column=0, sticky="w")

temperature_entry = ttk.Entry(frame_left, width=45, font=("Arial", 11))
temperature_entry.insert(0, temperature)
temperature_entry.grid(row=7, column=1, padx=(0, 5), sticky="ew")

max_tokens_label = ttk.Label(frame_left, text="Max-Tokens:")
max_tokens_label.grid(row=8, column=0, sticky="w")

max_tokens_entry = ttk.Entry(frame_left, width=45, font=("Arial", 11))
max_tokens_entry.insert(0, max_tokens)
max_tokens_entry.grid(row=8, column=1, padx=(0, 5), sticky="ew")

continuous_chat_var = tk.IntVar()
continuous_chat_var.set(continuous_chat)
continuous_chat_check = ttk.Checkbutton(frame_left, text="开启连续对话", variable=continuous_chat_var)
continuous_chat_check.grid(row=9, columnspan=2, pady=(5, 0), sticky="w")

user_input_label = ttk.Label(frame_left, text="用户消息输入框:")
user_input_label.grid(row=10, column=0, sticky="w")

# User input box stretches to fill the available width (frame_left column 1 has weight=1)
user_input_text = scrolledtext.ScrolledText(frame_left, height=11, width=40, font=("宋体", 11))
user_input_text.grid(row=10, column=1, padx=(0, 5), sticky="ew")
response_label = ttk.Label(frame_right, text="对话消息记录框:")
response_label.grid(row=0, column=0, columnspan=2, sticky="w")

response_text_box = scrolledtext.ScrolledText(frame_right, height=25, width=50, state=tk.DISABLED, font=("宋体", 11))
response_text_box.grid(row=1, column=0, columnspan=2, sticky="nsew")

# Send button
send_button = ttk.Button(frame_left, text="发送(Ctrl+Enter)", command=get_response, width=15)
send_button.grid(row=11, column=0, padx=(20, 10), pady=(10, 0), sticky="w")

clear_button = ttk.Button(frame_left, text="清空对话历史", command=clear_history, width=15)
clear_button.grid(row=11, column=1, padx=(10, 20), pady=(10, 0), sticky="n")

simplified_button = ttk.Button(frame_right, text="显示/隐藏参数", command=toggle_simplified, width=15)
simplified_button.grid(row=2, column=0, padx=(10, 15), pady=(10, 0), sticky="n")

def on_enter_key(event):
    get_response()
    return "break"  # suppress the default newline insertion

# Bind Ctrl+Enter on the input box to send the message
user_input_text.bind("<Control-Return>", on_enter_key)

def copy_user_message():
    response_text = response_text_box.get("end-2l linestart", "end-1c")
    if response_text.strip() != "":
        user_input = response_text.strip().split("\n")[0].replace("用户: ", "")
        copy_text_to_clipboard(user_input)


def copy_assistant_message():
    response_text = response_text_box.get("end-2l linestart", "end-1c")
    if response_text.strip() != "":
        lines = response_text.strip().split("\n")
        if len(lines) > 1:  # guard against a record with no reply line yet
            assistant_message = lines[1].replace(f"{available_models[selected_model]}: ", "")
            copy_text_to_clipboard(assistant_message)


def save_chat_history():
    chat_history = response_text_box.get("1.0", "end-1c")
    if chat_history.strip() != "":
        current_time = \
            datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
        filename = f"chat_history_{current_time}.txt"
        with open(filename, "w", encoding="utf-8") as file:
            file.write(chat_history)


user_copy_button = ttk.Button(frame_left, text="复制用户消息", command=copy_user_message, width=15)
user_copy_button.grid(row=12, column=0, padx=(20, 10), pady=(5, 0), sticky="w")

assistant_copy_button = ttk.Button(frame_left, text="复制GPT消息", command=copy_assistant_message, width=15)
assistant_copy_button.grid(row=12, column=1, padx=(10, 20), pady=(5, 0), sticky="n")

save_history_button = ttk.Button(frame_right, text="保存对话记录", command=save_chat_history, width=15)
save_history_button.grid(row=2, column=1, padx=(10, 15), pady=(10, 0), sticky="n")


def play_user_message():
    def play_audio():
        engine = pyttsx3.init()
        user_message = response_text_box.get("end-2l linestart", "end-1c").strip().split("\n")[0].replace("用户: ", "")
        engine.setProperty('rate', 200)
        engine.setProperty('volume', 1)
        engine.setProperty('pitch', 2)
        engine.say(user_message)
        engine.runAndWait()
        engine.stop()

    # Speak on a worker thread so the UI does not block
    audio_thread = threading.Thread(target=play_audio)
    audio_thread.start()

play_button1 = ttk.Button(frame_left, text="播放用户消息", command=play_user_message, width=15)
play_button1.grid(row=13, column=0, padx=(20, 10), pady=(5, 0), sticky="w")

def play_assistant_message():
    def play_audio():
        engine = pyttsx3.init()
        assistant_message = response_text_box.get("end-2l linestart", "end-1c").strip().split("\n")[1].replace(f"{available_models[selected_model]}: ", "")
        engine.setProperty('rate', 222)
        engine.setProperty('volume', 1)
        engine.setProperty('pitch', 5)
        engine.say(assistant_message)
        engine.runAndWait()
        engine.stop()

    audio_thread = threading.Thread(target=play_audio)
    audio_thread.start()

play_button2 = ttk.Button(frame_left, text="播放GPT回复", command=play_assistant_message, width=15)
play_button2.grid(row=13, column=1, padx=(10, 20), pady=(5, 0), sticky="n")


def on_closing():
    # Persist settings and the chat transcript before the window closes
    save_settings()
    save_chat_history()
    root.destroy()


root.protocol("WM_DELETE_WINDOW", on_closing)
# Run the main event loop
root.mainloop()
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
Flask
requests
openai==0.28.0
gunicorn==21.2.0
--------------------------------------------------------------------------------
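The `read_settings`/`save_settings` pair in `gpt软件-stream0125.py` persists configuration as JSON in `settings.txt`, falling back to per-key defaults when the file or a key is missing. A minimal standalone sketch of that round-trip pattern, using a hypothetical temp-file path and a reduced key set:

```python
import json
import os
import tempfile

# Hypothetical reduced default set, mirroring the shape of settings.txt
default_settings = {"selected_model": "gpt-3.5-turbo", "temperature": 0.5, "max_tokens": 2048}

def save_settings(path, data):
    # ensure_ascii=False keeps any Chinese system_message text readable on disk
    with open(path, "w", encoding="utf-8") as f:
        json.dump(data, f, indent=4, ensure_ascii=False)

def read_settings(path):
    # A missing file or missing keys fall back to the defaults, as in the app
    try:
        with open(path, "r", encoding="utf-8") as f:
            data = json.load(f)
    except FileNotFoundError:
        data = {}
    return {k: data.get(k, v) for k, v in default_settings.items()}

path = os.path.join(tempfile.gettempdir(), "settings_demo.txt")
save_settings(path, {"selected_model": "gpt-4", "temperature": 0.9})
print(read_settings(path))  # max_tokens falls back to the default 2048
```

The per-key `data.get(key, default)` lookup is what lets old `settings.txt` files keep working when a new option (such as `api_base`) is added in a later release.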