├── .github └── ISSUE_TEMPLATE.md ├── .gitignore ├── LICENSE ├── README.md ├── app.py ├── bot ├── baidu │ └── baidu_unit_bot.py ├── bot.py ├── bot_factory.py ├── chatgpt │ └── chat_gpt_bot.py └── openai │ └── open_ai_bot.py ├── bridge └── bridge.py ├── channel ├── channel.py ├── channel_factory.py └── wechat │ ├── wechat_channel.py │ └── wechaty_channel.py ├── common ├── expired_dict.py ├── log.py └── tmp_dir.py ├── config-template.json ├── config.py ├── docker ├── Dockerfile.alpine ├── Dockerfile.debian ├── build.alpine.sh ├── build.debian.sh ├── docker-compose.yaml ├── entrypoint.sh └── sample-chatgpt-on-wechat │ ├── .env │ ├── Makefile │ └── Name ├── docs └── images │ ├── group-chat-sample.jpg │ ├── image-create-sample.jpg │ └── single-chat-sample.jpg ├── requirement.txt ├── scripts ├── shutdown.sh ├── start.sh └── tout.sh └── voice ├── baidu └── baidu_voice.py ├── google └── google_voice.py ├── openai └── openai_voice.py ├── voice.py └── voice_factory.py /.github/ISSUE_TEMPLATE.md: -------------------------------------------------------------------------------- 1 | ### 前置确认 2 | 3 | 1. 网络能够访问openai接口 [#351](https://github.com/zhayujie/chatgpt-on-wechat/issues/351) 4 | 2. python 已安装:版本在 3.7 ~ 3.10 之间,依赖已安装 5 | 3. 在已有 issue 中未搜索到类似问题 6 | 4. [FAQS](https://github.com/zhayujie/chatgpt-on-wechat/wiki/FAQs) 中无类似问题 7 | 8 | 9 | ### 问题描述 10 | 11 | > 简要说明、截图、复现步骤等,也可以是需求或想法 12 | 13 | 14 | 15 | 16 | ### 终端日志 (如有报错) 17 | 18 | ``` 19 | [在此处粘贴终端日志] 20 | ``` 21 | 22 | 23 | 24 | ### 环境 25 | 26 | - 操作系统类型 (Mac/Windows/Linux): 27 | - Python版本 ( 执行 `python3 -V` ): 28 | - pip版本 ( 依赖问题此项必填,执行 `pip3 -V`): 29 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | .DS_Store 2 | .idea 3 | __pycache__/ 4 | venv* 5 | *.pyc 6 | config.json 7 | QR.png 8 | nohup.out 9 | tmp 10 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright (c) 2022 zhayujie 2 | 3 | Permission is hereby granted, free of charge, to any person obtaining a copy 4 | of this software and associated documentation files (the "Software"), to deal 5 | in the Software without restriction, including without limitation the rights 6 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 7 | copies of the Software, and to permit persons to whom the Software is 8 | furnished to do so, subject to the following conditions: 9 | 10 | The above copyright notice and this permission notice shall be included in all 11 | copies or substantial portions of the Software. 12 | 13 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 14 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 15 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 16 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 17 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 18 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 19 | SOFTWARE. 
-------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # 简介 2 | 3 | > ChatGPT近期以强大的对话和信息整合能力风靡全网,可以写代码、改论文、讲故事,几乎无所不能,这让人不禁有个大胆的想法,能否用他的对话模型把我们的微信打造成一个智能机器人,可以在与好友对话中给出意想不到的回应,而且再也不用担心女朋友影响我们 ~~打游戏~~ 工作了。 4 | 5 | 6 | 基于ChatGPT的微信聊天机器人,通过 [ChatGPT](https://github.com/openai/openai-python) 接口生成对话内容,使用 [itchat](https://github.com/littlecodersh/ItChat) 实现微信消息的接收和自动回复。已实现的特性如下: 7 | 8 | - [x] **文本对话:** 接收私聊及群组中的微信消息,使用ChatGPT生成回复内容,完成自动回复 9 | - [x] **规则定制化:** 支持私聊中按指定规则触发自动回复,支持对群组设置自动回复白名单 10 | - [x] **多账号:** 支持多微信账号同时运行 11 | - [x] **图片生成:** 支持根据描述生成图片,并自动发送至个人聊天或群聊 12 | - [x] **上下文记忆**:支持多轮对话记忆,且为每个好友维护独立的上下会话 13 | - [x] **语音识别:** 支持接收和处理语音消息,通过文字或语音回复 14 | 15 | 16 | # 更新日志 17 | 18 | >**2023.03.09:** 基于 `whisper API` 实现对微信语音消息的解析和回复,添加配置项 `"speech_recognition":true` 即可启用,使用参考 [#415](https://github.com/zhayujie/chatgpt-on-wechat/issues/415)。(contributed by [wanggang1987](https://github.com/wanggang1987) in [#385](https://github.com/zhayujie/chatgpt-on-wechat/pull/385)) 19 | 20 | >**2023.03.02:** 接入[ChatGPT API](https://platform.openai.com/docs/guides/chat) (gpt-3.5-turbo),默认使用该模型进行对话,需升级openai依赖 (`pip3 install --upgrade openai`)。网络问题参考 [#351](https://github.com/zhayujie/chatgpt-on-wechat/issues/351) 21 | 22 | >**2023.02.20:** 增加 [python-wechaty](https://github.com/wechaty/python-wechaty) 作为可选渠道,使用Pad协议相对稳定,但Token收费 (使用参考[#244](https://github.com/zhayujie/chatgpt-on-wechat/pull/244),contributed by [ZQ7](https://github.com/ZQ7)) 23 | 24 | >**2023.02.09:** 扫码登录存在封号风险,请谨慎使用,参考[#58](https://github.com/AutumnWhj/ChatGPT-wechat-bot/issues/158) 25 | 26 | >**2023.02.05:** 在openai官方接口方案中 (GPT-3模型) 实现上下文对话 27 | 28 | >**2022.12.19:** 引入 [itchat-uos](https://github.com/why2lyj/ItChat-UOS) 替换 itchat,解决由于不能登录网页微信而无法使用的问题,且解决Python3.9的兼容问题 29 | 30 | >**2022.12.18:** 支持根据描述生成图片并发送,openai版本需大于0.25.0 31 | 32 | >**2022.12.17:** 原来的方案是从 [ChatGPT页面](https://chat.openai.com/chat) 获取session_token,使用 [revChatGPT](https://github.com/acheong08/ChatGPT) 直接访问web接口,但随着ChatGPT接入Cloudflare人机验证,这一方案难以在服务器顺利运行。 所以目前使用的方案是调用 OpenAI 官方提供的 [API](https://beta.openai.com/docs/api-reference/introduction),回复质量上基本接近于ChatGPT的内容,劣势是暂不支持有上下文记忆的对话,优势是稳定性和响应速度较好。 33 | 34 | # 使用效果 35 | 36 | ### 个人聊天 37 | 38 | ![single-chat-sample.jpg](docs/images/single-chat-sample.jpg) 39 | 40 | ### 群组聊天 41 | 42 | ![group-chat-sample.jpg](docs/images/group-chat-sample.jpg) 43 | 44 | ### 图片生成 45 | 46 | ![group-chat-sample.jpg](docs/images/image-create-sample.jpg) 47 | 48 | 49 | # 快速开始 50 | 51 | ## 准备 52 | 53 | ### 1. 
OpenAI账号注册 54 | 55 | 前往 [OpenAI注册页面](https://beta.openai.com/signup) 创建账号,参考这篇 [教程](https://www.pythonthree.com/register-openai-chatgpt/) 可以通过虚拟手机号来接收验证码。创建完账号则前往 [API管理页面](https://beta.openai.com/account/api-keys) 创建一个 API Key 并保存下来,后面需要在项目中配置这个key。 56 | 57 | > 项目中使用的对话模型是 davinci,计费方式是约每 750 字 (包含请求和回复) 消耗 $0.02,图片生成是每张消耗 $0.016,账号创建有免费的 $18 额度,使用完可以更换邮箱重新注册。 58 | 59 | 60 | ### 2.运行环境 61 | 62 | 支持 Linux、MacOS、Windows 系统(可在Linux服务器上长期运行),同时需安装 `Python`。 63 | > 建议Python版本在 3.7.1~3.9.X 之间,3.10及以上版本在 MacOS 可用,其他系统上不确定能否正常运行。 64 | 65 | 66 | 1.克隆项目代码: 67 | 68 | ```bash 69 | git clone https://github.com/zhayujie/chatgpt-on-wechat 70 | cd chatgpt-on-wechat/ 71 | ``` 72 | 73 | 2.安装所需核心依赖: 74 | 75 | ```bash 76 | pip3 install itchat-uos==1.5.0.dev0 77 | pip3 install --upgrade openai 78 | ``` 79 | 注:`itchat-uos`使用指定版本1.5.0.dev0,`openai`使用最新版本,需高于0.27.0。 80 | 81 | 82 | ## 配置 83 | 84 | 配置文件的模板在根目录的`config-template.json`中,需复制该模板创建最终生效的 `config.json` 文件: 85 | 86 | ```bash 87 | cp config-template.json config.json 88 | ``` 89 | 90 | 然后在`config.json`中填入配置,以下是对默认配置的说明,可根据需要进行自定义修改: 91 | 92 | ```bash 93 | # config.json文件内容示例 94 | { 95 | "open_ai_api_key": "YOUR API KEY", # 填入上面创建的 OpenAI API KEY 96 | "proxy": "127.0.0.1:7890", # 代理客户端的ip和端口 97 | "single_chat_prefix": ["bot", "@bot"], # 私聊时文本需要包含该前缀才能触发机器人回复 98 | "single_chat_reply_prefix": "[bot] ", # 私聊时自动回复的前缀,用于区分真人 99 | "group_chat_prefix": ["@bot"], # 群聊时包含该前缀则会触发机器人回复 100 | "group_name_white_list": ["ChatGPT测试群", "ChatGPT测试群2"], # 开启自动回复的群名称列表 101 | "image_create_prefix": ["画", "看", "找"], # 开启图片回复的前缀 102 | "conversation_max_tokens": 1000, # 支持上下文记忆的最多字符数 103 | "character_desc": "你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。" # 人格描述 104 | } 105 | ``` 106 | **配置说明:** 107 | 108 | **1.个人聊天** 109 | 110 | + 个人聊天中,需要以 "bot"或"@bot" 为开头的内容触发机器人,对应配置项 `single_chat_prefix` (如果不需要以前缀触发可以填写 `"single_chat_prefix": [""]`) 111 | + 机器人回复的内容会以 "[bot] " 作为前缀, 以区分真人,对应的配置项为 `single_chat_reply_prefix` (如果不需要前缀可以填写 `"single_chat_reply_prefix": ""`) 112 | 113 | **2.群组聊天** 114 | 115 | + 群组聊天中,群名称需配置在 `group_name_white_list ` 中才能开启群聊自动回复。如果想对所有群聊生效,可以直接填写 `"group_name_white_list": ["ALL_GROUP"]` 116 | + 默认只要被人 @ 就会触发机器人自动回复;另外群聊天中只要检测到以 "@bot" 开头的内容,同样会自动回复(方便自己触发),这对应配置项 `group_chat_prefix` 117 | + 可选配置: `group_name_keyword_white_list`配置项支持模糊匹配群名称,`group_chat_keyword`配置项则支持模糊匹配群消息内容,用法与上述两个配置项相同。(Contributed by [evolay](https://github.com/evolay)) 118 | 119 | **3.语音识别** 120 | + 添加 `"speech_recognition": true` 将开启语音识别,默认使用openai的whisper模型识别为文字,同时以文字回复,目前只支持私聊 (注意由于语音消息无法匹配前缀,一旦开启将对所有语音自动回复); 121 | + 添加 `"voice_reply_voice": true` 将开启语音回复语音,但是需要配置对应语音合成平台的key,由于itchat协议的限制,只能发送语音mp3文件,若使用wechaty则回复的是微信语音。 122 | 123 | **4.其他配置** 124 | 125 | + `proxy`:由于目前 `openai` 接口国内无法访问,需配置代理客户端的地址,详情参考 [#351](https://github.com/zhayujie/chatgpt-on-wechat/issues/351) 126 | + 对于图像生成,在满足个人或群组触发条件外,还需要额外的关键词前缀来触发,对应配置 `image_create_prefix ` 127 | + 关于OpenAI对话及图片接口的参数配置(内容自由度、回复字数限制、图片大小等),可以参考 [对话接口](https://beta.openai.com/docs/api-reference/completions) 和 [图像接口](https://beta.openai.com/docs/api-reference/completions) 文档直接在 [代码](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/bot/openai/open_ai_bot.py) `bot/openai/open_ai_bot.py` 中进行调整。 128 | + `conversation_max_tokens`:表示能够记忆的上下文最大字数(一问一答为一组对话,如果累积的对话字数超出限制,就会优先移除最早的一组对话) 129 | + `character_desc` 配置中保存着你对机器人说的一段话,他会记住这段话并作为他的设定,你可以为他定制任何人格 (关于会话上下文的更多内容参考该 [issue](https://github.com/zhayujie/chatgpt-on-wechat/issues/43)) 130 | 131 | 132 | ## 运行 133 | 134 | ### 1.本地运行 135 | 136 | 如果是开发机 **本地运行**,直接在项目根目录下执行: 137 | 138 
| ```bash 139 | python3 app.py 140 | ``` 141 | 终端输出二维码后,使用微信进行扫码,当输出 "Start auto replying" 时表示自动回复程序已经成功运行了(注意:用于登录的微信需要在支付处已完成实名认证)。扫码登录后你的账号就成为机器人了,可以在微信手机端通过配置的关键词触发自动回复 (任意好友发送消息给你,或是自己发消息给好友),参考[#142](https://github.com/zhayujie/chatgpt-on-wechat/issues/142)。 142 | 143 | 144 | ### 2.服务器部署 145 | 146 | 使用nohup命令在后台运行程序: 147 | 148 | ```bash 149 | touch nohup.out # 首次运行需要新建日志文件 150 | nohup python3 app.py & tail -f nohup.out # 在后台运行程序并通过日志输出二维码 151 | ``` 152 | 扫码登录后程序即可运行于服务器后台,此时可通过 `ctrl+c` 关闭日志,不会影响后台程序的运行。使用 `ps -ef | grep app.py | grep -v grep` 命令可查看运行于后台的进程,如果想要重新启动程序可以先 `kill` 掉对应的进程。日志关闭后如果想要再次打开只需输入 `tail -f nohup.out`。 153 | scripts/目录有相应的脚本可以调用 154 | 155 | > **注意:** 如果 扫码后手机提示登录验证需要等待5s,而终端的二维码再次刷新并提示 `Log in time out, reloading QR code`,此时需参考此 [issue](https://github.com/zhayujie/chatgpt-on-wechat/issues/8) 修改一行代码即可解决。 156 | 157 | > **多账号支持:** 将 项目复制多份,分别启动程序,用不同账号扫码登录即可实现同时运行。 158 | 159 | > **特殊指令:** 用户向机器人发送 **#清除记忆** 即可清空该用户的上下文记忆。 160 | 161 | 162 | ### 3.Docker部署 163 | 164 | 参考文档 [Docker部署](https://github.com/limccn/chatgpt-on-wechat/wiki/Docker%E9%83%A8%E7%BD%B2) (Contributed by [limccn](https://github.com/limccn))。 165 | 166 | 167 | ## 常见问题 168 | 169 | FAQs: 170 | 171 | 172 | ## 联系 173 | 174 | 欢迎提交PR、Issues,以及Star支持一下。程序运行遇到问题优先查看 [常见问题列表](https://github.com/zhayujie/chatgpt-on-wechat/wiki/FAQs) ,其次前往 [Issues](https://github.com/zhayujie/chatgpt-on-wechat/issues) 中搜索,若无相似问题可创建Issue,或加微信 eijuyahz 交流。 175 | 176 | 177 | -------------------------------------------------------------------------------- /app.py: -------------------------------------------------------------------------------- 1 | # encoding:utf-8 2 | 3 | import config 4 | from channel import channel_factory 5 | from common.log import logger 6 | 7 | 8 | if __name__ == '__main__': 9 | try: 10 | # load config 11 | config.load_config() 12 | 13 | # create channel 14 | channel = channel_factory.create_channel("wx") 15 | 16 | # startup channel 17 | channel.startup() 18 | except Exception as e: 19 | logger.error("App startup failed!") 20 | logger.exception(e) 21 | -------------------------------------------------------------------------------- /bot/baidu/baidu_unit_bot.py: -------------------------------------------------------------------------------- 1 | # encoding:utf-8 2 | 3 | import requests 4 | from bot.bot import Bot 5 | 6 | 7 | # Baidu Unit对话接口 (可用, 但能力较弱) 8 | class BaiduUnitBot(Bot): 9 | def reply(self, query, context=None): 10 | token = self.get_token() 11 | url = 'https://aip.baidubce.com/rpc/2.0/unit/service/v3/chat?access_token=' + token 12 | post_data = "{\"version\":\"3.0\",\"service_id\":\"S73177\",\"session_id\":\"\",\"log_id\":\"7758521\",\"skill_ids\":[\"1221886\"],\"request\":{\"terminal_id\":\"88888\",\"query\":\"" + query + "\", \"hyper_params\": {\"chat_custom_bot_profile\": 1}}}" 13 | print(post_data) 14 | headers = {'content-type': 'application/x-www-form-urlencoded'} 15 | response = requests.post(url, data=post_data.encode(), headers=headers) 16 | if response: 17 | return response.json()['result']['context']['SYS_PRESUMED_HIST'][1] 18 | 19 | def get_token(self): 20 | access_key = 'YOUR_ACCESS_KEY' 21 | secret_key = 'YOUR_SECRET_KEY' 22 | host = 'https://aip.baidubce.com/oauth/2.0/token?grant_type=client_credentials&client_id=' + access_key + '&client_secret=' + secret_key 23 | response = requests.get(host) 24 | if response: 25 | print(response.json()) 26 | return response.json()['access_token'] 27 | -------------------------------------------------------------------------------- /bot/bot.py: 
-------------------------------------------------------------------------------- 1 | """ 2 | Auto-replay chat robot abstract class 3 | """ 4 | 5 | 6 | class Bot(object): 7 | def reply(self, query, context=None): 8 | """ 9 | bot auto-reply content 10 | :param req: received message 11 | :return: reply content 12 | """ 13 | raise NotImplementedError 14 | -------------------------------------------------------------------------------- /bot/bot_factory.py: -------------------------------------------------------------------------------- 1 | """ 2 | channel factory 3 | """ 4 | 5 | 6 | def create_bot(bot_type): 7 | """ 8 | create a channel instance 9 | :param channel_type: channel type code 10 | :return: channel instance 11 | """ 12 | if bot_type == 'baidu': 13 | # Baidu Unit对话接口 14 | from bot.baidu.baidu_unit_bot import BaiduUnitBot 15 | return BaiduUnitBot() 16 | 17 | elif bot_type == 'chatGPT': 18 | # ChatGPT 网页端web接口 19 | from bot.chatgpt.chat_gpt_bot import ChatGPTBot 20 | return ChatGPTBot() 21 | 22 | elif bot_type == 'openAI': 23 | # OpenAI 官方对话模型API 24 | from bot.openai.open_ai_bot import OpenAIBot 25 | return OpenAIBot() 26 | raise RuntimeError 27 | -------------------------------------------------------------------------------- /bot/chatgpt/chat_gpt_bot.py: -------------------------------------------------------------------------------- 1 | # encoding:utf-8 2 | 3 | from bot.bot import Bot 4 | from config import conf, load_config 5 | from common.log import logger 6 | from common.expired_dict import ExpiredDict 7 | import openai 8 | import time 9 | 10 | if conf().get('expires_in_seconds'): 11 | all_sessions = ExpiredDict(conf().get('expires_in_seconds')) 12 | else: 13 | all_sessions = dict() 14 | 15 | # OpenAI对话模型API (可用) 16 | class ChatGPTBot(Bot): 17 | def __init__(self): 18 | openai.api_key = conf().get('open_ai_api_key') 19 | proxy = conf().get('proxy') 20 | if proxy: 21 | openai.proxy = proxy 22 | 23 | def reply(self, query, context=None): 24 | # acquire reply content 25 | if not context or not context.get('type') or context.get('type') == 'TEXT': 26 | logger.info("[OPEN_AI] query={}".format(query)) 27 | session_id = context.get('session_id') or context.get('from_user_id') 28 | if query == '#清除记忆': 29 | Session.clear_session(session_id) 30 | return '记忆已清除' 31 | elif query == '#清除所有': 32 | Session.clear_all_session() 33 | return '所有人记忆已清除' 34 | elif query == '#更新配置': 35 | load_config() 36 | return '配置已更新' 37 | 38 | session = Session.build_session_query(query, session_id) 39 | logger.debug("[OPEN_AI] session query={}".format(session)) 40 | 41 | # if context.get('stream'): 42 | # # reply in stream 43 | # return self.reply_text_stream(query, new_query, session_id) 44 | 45 | reply_content = self.reply_text(session, session_id, 0) 46 | logger.debug("[OPEN_AI] new_query={}, session_id={}, reply_cont={}".format(session, session_id, reply_content["content"])) 47 | if reply_content["completion_tokens"] > 0: 48 | Session.save_session(reply_content["content"], session_id, reply_content["total_tokens"]) 49 | return reply_content["content"] 50 | 51 | elif context.get('type', None) == 'IMAGE_CREATE': 52 | return self.create_img(query, 0) 53 | 54 | def reply_text(self, session, session_id, retry_count=0) ->dict: 55 | ''' 56 | call openai's ChatCompletion to get the answer 57 | :param session: a conversation session 58 | :param session_id: session id 59 | :param retry_count: retry count 60 | :return: {} 61 | ''' 62 | try: 63 | response = openai.ChatCompletion.create( 64 | model="gpt-3.5-turbo", # 
对话模型的名称 65 | messages=session, 66 | temperature=0.9, # 值在[0,1]之间,越大表示回复越具有不确定性 67 | #max_tokens=4096, # 回复最大的字符数 68 | top_p=1, 69 | frequency_penalty=0.0, # [-2,2]之间,该值越大则更倾向于产生不同的内容 70 | presence_penalty=0.0, # [-2,2]之间,该值越大则更倾向于产生不同的内容 71 | ) 72 | # logger.info("[ChatGPT] reply={}, total_tokens={}".format(response.choices[0]['message']['content'], response["usage"]["total_tokens"])) 73 | return {"total_tokens": response["usage"]["total_tokens"], 74 | "completion_tokens": response["usage"]["completion_tokens"], 75 | "content": response.choices[0]['message']['content']} 76 | except openai.error.RateLimitError as e: 77 | # rate limit exception 78 | logger.warn(e) 79 | if retry_count < 1: 80 | time.sleep(5) 81 | logger.warn("[OPEN_AI] RateLimit exceed, 第{}次重试".format(retry_count+1)) 82 | return self.reply_text(session, session_id, retry_count+1) 83 | else: 84 | return {"completion_tokens": 0, "content": "提问太快啦,请休息一下再问我吧"} 85 | except openai.error.APIConnectionError as e: 86 | # api connection exception 87 | logger.warn(e) 88 | logger.warn("[OPEN_AI] APIConnection failed") 89 | return {"completion_tokens": 0, "content":"我连接不到你的网络"} 90 | except openai.error.Timeout as e: 91 | logger.warn(e) 92 | logger.warn("[OPEN_AI] Timeout") 93 | return {"completion_tokens": 0, "content":"我没有收到你的消息"} 94 | except Exception as e: 95 | # unknown exception 96 | logger.exception(e) 97 | Session.clear_session(session_id) 98 | return {"completion_tokens": 0, "content": "请再问我一次吧"} 99 | 100 | def create_img(self, query, retry_count=0): 101 | try: 102 | logger.info("[OPEN_AI] image_query={}".format(query)) 103 | response = openai.Image.create( 104 | prompt=query, #图片描述 105 | n=1, #每次生成图片的数量 106 | size="256x256" #图片大小,可选有 256x256, 512x512, 1024x1024 107 | ) 108 | image_url = response['data'][0]['url'] 109 | logger.info("[OPEN_AI] image_url={}".format(image_url)) 110 | return image_url 111 | except openai.error.RateLimitError as e: 112 | logger.warn(e) 113 | if retry_count < 1: 114 | time.sleep(5) 115 | logger.warn("[OPEN_AI] ImgCreate RateLimit exceed, 第{}次重试".format(retry_count+1)) 116 | return self.create_img(query, retry_count+1) 117 | else: 118 | return "提问太快啦,请休息一下再问我吧" 119 | except Exception as e: 120 | logger.exception(e) 121 | return None 122 | 123 | class Session(object): 124 | @staticmethod 125 | def build_session_query(query, session_id): 126 | ''' 127 | build query with conversation history 128 | e.g. 
[ 129 | {"role": "system", "content": "You are a helpful assistant."}, 130 | {"role": "user", "content": "Who won the world series in 2020?"}, 131 | {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."}, 132 | {"role": "user", "content": "Where was it played?"} 133 | ] 134 | :param query: query content 135 | :param session_id: session id 136 | :return: query content with conversaction 137 | ''' 138 | session = all_sessions.get(session_id, []) 139 | if len(session) == 0: 140 | system_prompt = conf().get("character_desc", "") 141 | system_item = {'role': 'system', 'content': system_prompt} 142 | session.append(system_item) 143 | all_sessions[session_id] = session 144 | user_item = {'role': 'user', 'content': query} 145 | session.append(user_item) 146 | return session 147 | 148 | @staticmethod 149 | def save_session(answer, session_id, total_tokens): 150 | max_tokens = conf().get("conversation_max_tokens") 151 | if not max_tokens: 152 | # default 3000 153 | max_tokens = 1000 154 | max_tokens=int(max_tokens) 155 | 156 | session = all_sessions.get(session_id) 157 | if session: 158 | # append conversation 159 | gpt_item = {'role': 'assistant', 'content': answer} 160 | session.append(gpt_item) 161 | 162 | # discard exceed limit conversation 163 | Session.discard_exceed_conversation(session, max_tokens, total_tokens) 164 | 165 | 166 | @staticmethod 167 | def discard_exceed_conversation(session, max_tokens, total_tokens): 168 | dec_tokens = int(total_tokens) 169 | # logger.info("prompt tokens used={},max_tokens={}".format(used_tokens,max_tokens)) 170 | while dec_tokens > max_tokens: 171 | # pop first conversation 172 | if len(session) > 3: 173 | session.pop(1) 174 | session.pop(1) 175 | else: 176 | break 177 | dec_tokens = dec_tokens - max_tokens 178 | 179 | @staticmethod 180 | def clear_session(session_id): 181 | all_sessions[session_id] = [] 182 | 183 | @staticmethod 184 | def clear_all_session(): 185 | all_sessions.clear() 186 | -------------------------------------------------------------------------------- /bot/openai/open_ai_bot.py: -------------------------------------------------------------------------------- 1 | # encoding:utf-8 2 | 3 | from bot.bot import Bot 4 | from config import conf 5 | from common.log import logger 6 | import openai 7 | import time 8 | 9 | user_session = dict() 10 | 11 | # OpenAI对话模型API (可用) 12 | class OpenAIBot(Bot): 13 | def __init__(self): 14 | openai.api_key = conf().get('open_ai_api_key') 15 | 16 | 17 | def reply(self, query, context=None): 18 | # acquire reply content 19 | if not context or not context.get('type') or context.get('type') == 'TEXT': 20 | logger.info("[OPEN_AI] query={}".format(query)) 21 | from_user_id = context.get('from_user_id') or context.get('session_id') 22 | if query == '#清除记忆': 23 | Session.clear_session(from_user_id) 24 | return '记忆已清除' 25 | elif query == '#清除所有': 26 | Session.clear_all_session() 27 | return '所有人记忆已清除' 28 | 29 | new_query = Session.build_session_query(query, from_user_id) 30 | logger.debug("[OPEN_AI] session query={}".format(new_query)) 31 | 32 | reply_content = self.reply_text(new_query, from_user_id, 0) 33 | logger.debug("[OPEN_AI] new_query={}, user={}, reply_cont={}".format(new_query, from_user_id, reply_content)) 34 | if reply_content and query: 35 | Session.save_session(query, reply_content, from_user_id) 36 | return reply_content 37 | 38 | elif context.get('type', None) == 'IMAGE_CREATE': 39 | return self.create_img(query, 0) 40 | 41 | def reply_text(self, query, user_id, 
retry_count=0): 42 | try: 43 | response = openai.Completion.create( 44 | model="text-davinci-003", # 对话模型的名称 45 | prompt=query, 46 | temperature=0.9, # 值在[0,1]之间,越大表示回复越具有不确定性 47 | max_tokens=1200, # 回复最大的字符数 48 | top_p=1, 49 | frequency_penalty=0.0, # [-2,2]之间,该值越大则更倾向于产生不同的内容 50 | presence_penalty=0.0, # [-2,2]之间,该值越大则更倾向于产生不同的内容 51 | stop=["\n\n\n"] 52 | ) 53 | res_content = response.choices[0]['text'].strip().replace('<|endoftext|>', '') 54 | logger.info("[OPEN_AI] reply={}".format(res_content)) 55 | return res_content 56 | except openai.error.RateLimitError as e: 57 | # rate limit exception 58 | logger.warn(e) 59 | if retry_count < 1: 60 | time.sleep(5) 61 | logger.warn("[OPEN_AI] RateLimit exceed, 第{}次重试".format(retry_count+1)) 62 | return self.reply_text(query, user_id, retry_count+1) 63 | else: 64 | return "提问太快啦,请休息一下再问我吧" 65 | except Exception as e: 66 | # unknown exception 67 | logger.exception(e) 68 | Session.clear_session(user_id) 69 | return "请再问我一次吧" 70 | 71 | 72 | def create_img(self, query, retry_count=0): 73 | try: 74 | logger.info("[OPEN_AI] image_query={}".format(query)) 75 | response = openai.Image.create( 76 | prompt=query, #图片描述 77 | n=1, #每次生成图片的数量 78 | size="256x256" #图片大小,可选有 256x256, 512x512, 1024x1024 79 | ) 80 | image_url = response['data'][0]['url'] 81 | logger.info("[OPEN_AI] image_url={}".format(image_url)) 82 | return image_url 83 | except openai.error.RateLimitError as e: 84 | logger.warn(e) 85 | if retry_count < 1: 86 | time.sleep(5) 87 | logger.warn("[OPEN_AI] ImgCreate RateLimit exceed, 第{}次重试".format(retry_count+1)) 88 | return self.reply_text(query, retry_count+1) 89 | else: 90 | return "提问太快啦,请休息一下再问我吧" 91 | except Exception as e: 92 | logger.exception(e) 93 | return None 94 | 95 | 96 | class Session(object): 97 | @staticmethod 98 | def build_session_query(query, user_id): 99 | ''' 100 | build query with conversation history 101 | e.g. 
Q: xxx 102 | A: xxx 103 | Q: xxx 104 | :param query: query content 105 | :param user_id: from user id 106 | :return: query content with conversaction 107 | ''' 108 | prompt = conf().get("character_desc", "") 109 | if prompt: 110 | prompt += "<|endoftext|>\n\n\n" 111 | session = user_session.get(user_id, None) 112 | if session: 113 | for conversation in session: 114 | prompt += "Q: " + conversation["question"] + "\n\n\nA: " + conversation["answer"] + "<|endoftext|>\n" 115 | prompt += "Q: " + query + "\nA: " 116 | return prompt 117 | else: 118 | return prompt + "Q: " + query + "\nA: " 119 | 120 | @staticmethod 121 | def save_session(query, answer, user_id): 122 | max_tokens = conf().get("conversation_max_tokens") 123 | if not max_tokens: 124 | # default 3000 125 | max_tokens = 1000 126 | conversation = dict() 127 | conversation["question"] = query 128 | conversation["answer"] = answer 129 | session = user_session.get(user_id) 130 | logger.debug(conversation) 131 | logger.debug(session) 132 | if session: 133 | # append conversation 134 | session.append(conversation) 135 | else: 136 | # create session 137 | queue = list() 138 | queue.append(conversation) 139 | user_session[user_id] = queue 140 | 141 | # discard exceed limit conversation 142 | Session.discard_exceed_conversation(user_session[user_id], max_tokens) 143 | 144 | 145 | @staticmethod 146 | def discard_exceed_conversation(session, max_tokens): 147 | count = 0 148 | count_list = list() 149 | for i in range(len(session)-1, -1, -1): 150 | # count tokens of conversation list 151 | history_conv = session[i] 152 | count += len(history_conv["question"]) + len(history_conv["answer"]) 153 | count_list.append(count) 154 | 155 | for c in count_list: 156 | if c > max_tokens: 157 | # pop first conversation 158 | session.pop(0) 159 | 160 | @staticmethod 161 | def clear_session(user_id): 162 | user_session[user_id] = [] 163 | 164 | @staticmethod 165 | def clear_all_session(): 166 | user_session.clear() 167 | -------------------------------------------------------------------------------- /bridge/bridge.py: -------------------------------------------------------------------------------- 1 | from bot import bot_factory 2 | from voice import voice_factory 3 | 4 | 5 | class Bridge(object): 6 | def __init__(self): 7 | pass 8 | 9 | def fetch_reply_content(self, query, context): 10 | return bot_factory.create_bot("chatGPT").reply(query, context) 11 | 12 | def fetch_voice_to_text(self, voiceFile): 13 | return voice_factory.create_voice("openai").voiceToText(voiceFile) 14 | 15 | def fetch_text_to_voice(self, text): 16 | return voice_factory.create_voice("baidu").textToVoice(text) -------------------------------------------------------------------------------- /channel/channel.py: -------------------------------------------------------------------------------- 1 | """ 2 | Message sending channel abstract class 3 | """ 4 | 5 | from bridge.bridge import Bridge 6 | 7 | class Channel(object): 8 | def startup(self): 9 | """ 10 | init channel 11 | """ 12 | raise NotImplementedError 13 | 14 | def handle_text(self, msg): 15 | """ 16 | process received msg 17 | :param msg: message object 18 | """ 19 | raise NotImplementedError 20 | 21 | def send(self, msg, receiver): 22 | """ 23 | send message to user 24 | :param msg: message content 25 | :param receiver: receiver channel account 26 | :return: 27 | """ 28 | raise NotImplementedError 29 | 30 | def build_reply_content(self, query, context=None): 31 | return Bridge().fetch_reply_content(query, context) 32 | 33 | def 
build_voice_to_text(self, voice_file): 34 | return Bridge().fetch_voice_to_text(voice_file) 35 | 36 | def build_text_to_voice(self, text): 37 | return Bridge().fetch_text_to_voice(text) 38 | -------------------------------------------------------------------------------- /channel/channel_factory.py: -------------------------------------------------------------------------------- 1 | """ 2 | channel factory 3 | """ 4 | 5 | def create_channel(channel_type): 6 | """ 7 | create a channel instance 8 | :param channel_type: channel type code 9 | :return: channel instance 10 | """ 11 | if channel_type == 'wx': 12 | from channel.wechat.wechat_channel import WechatChannel 13 | return WechatChannel() 14 | elif channel_type == 'wxy': 15 | from channel.wechat.wechaty_channel import WechatyChannel 16 | return WechatyChannel() 17 | raise RuntimeError 18 | -------------------------------------------------------------------------------- /channel/wechat/wechat_channel.py: -------------------------------------------------------------------------------- 1 | # encoding:utf-8 2 | 3 | """ 4 | wechat channel 5 | """ 6 | 7 | import itchat 8 | import json 9 | from itchat.content import * 10 | from channel.channel import Channel 11 | from concurrent.futures import ThreadPoolExecutor 12 | from common.log import logger 13 | from common.tmp_dir import TmpDir 14 | from config import conf 15 | import requests 16 | import io 17 | 18 | thread_pool = ThreadPoolExecutor(max_workers=8) 19 | 20 | 21 | @itchat.msg_register(TEXT) 22 | def handler_single_msg(msg): 23 | WechatChannel().handle_text(msg) 24 | return None 25 | 26 | 27 | @itchat.msg_register(TEXT, isGroupChat=True) 28 | def handler_group_msg(msg): 29 | WechatChannel().handle_group(msg) 30 | return None 31 | 32 | 33 | @itchat.msg_register(VOICE) 34 | def handler_single_voice(msg): 35 | WechatChannel().handle_voice(msg) 36 | return None 37 | 38 | 39 | class WechatChannel(Channel): 40 | def __init__(self): 41 | pass 42 | 43 | def startup(self): 44 | # login by scan QRCode 45 | itchat.auto_login(enableCmdQR=2) 46 | 47 | # start message listener 48 | itchat.run() 49 | 50 | def handle_voice(self, msg): 51 | if conf().get('speech_recognition') != True : 52 | return 53 | logger.debug("[WX]receive voice msg: " + msg['FileName']) 54 | thread_pool.submit(self._do_handle_voice, msg) 55 | 56 | def _do_handle_voice(self, msg): 57 | from_user_id = msg['FromUserName'] 58 | other_user_id = msg['User']['UserName'] 59 | if from_user_id == other_user_id: 60 | file_name = TmpDir().path() + msg['FileName'] 61 | msg.download(file_name) 62 | query = super().build_voice_to_text(file_name) 63 | if conf().get('voice_reply_voice'): 64 | self._do_send_voice(query, from_user_id) 65 | else: 66 | self._do_send_text(query, from_user_id) 67 | 68 | def handle_text(self, msg): 69 | logger.debug("[WX]receive text msg: " + json.dumps(msg, ensure_ascii=False)) 70 | content = msg['Text'] 71 | self._handle_single_msg(msg, content) 72 | 73 | def _handle_single_msg(self, msg, content): 74 | from_user_id = msg['FromUserName'] 75 | to_user_id = msg['ToUserName'] # 接收人id 76 | other_user_id = msg['User']['UserName'] # 对手方id 77 | match_prefix = self.check_prefix(content, conf().get('single_chat_prefix')) 78 | if "」\n- - - - - - - - - - - - - - -" in content: 79 | logger.debug("[WX]reference query skipped") 80 | return 81 | if from_user_id == other_user_id and match_prefix is not None: 82 | # 好友向自己发送消息 83 | if match_prefix != '': 84 | str_list = content.split(match_prefix, 1) 85 | if len(str_list) == 2: 86 | content 
= str_list[1].strip() 87 | 88 | img_match_prefix = self.check_prefix(content, conf().get('image_create_prefix')) 89 | if img_match_prefix: 90 | content = content.split(img_match_prefix, 1)[1].strip() 91 | thread_pool.submit(self._do_send_img, content, from_user_id) 92 | else : 93 | thread_pool.submit(self._do_send_text, content, from_user_id) 94 | elif to_user_id == other_user_id and match_prefix: 95 | # 自己给好友发送消息 96 | str_list = content.split(match_prefix, 1) 97 | if len(str_list) == 2: 98 | content = str_list[1].strip() 99 | img_match_prefix = self.check_prefix(content, conf().get('image_create_prefix')) 100 | if img_match_prefix: 101 | content = content.split(img_match_prefix, 1)[1].strip() 102 | thread_pool.submit(self._do_send_img, content, to_user_id) 103 | else: 104 | thread_pool.submit(self._do_send_text, content, to_user_id) 105 | 106 | 107 | def handle_group(self, msg): 108 | logger.debug("[WX]receive group msg: " + json.dumps(msg, ensure_ascii=False)) 109 | group_name = msg['User'].get('NickName', None) 110 | group_id = msg['User'].get('UserName', None) 111 | if not group_name: 112 | return "" 113 | origin_content = msg['Content'] 114 | content = msg['Content'] 115 | content_list = content.split(' ', 1) 116 | context_special_list = content.split('\u2005', 1) 117 | if len(context_special_list) == 2: 118 | content = context_special_list[1] 119 | elif len(content_list) == 2: 120 | content = content_list[1] 121 | if "」\n- - - - - - - - - - - - - - -" in content: 122 | logger.debug("[WX]reference query skipped") 123 | return "" 124 | config = conf() 125 | match_prefix = (msg['IsAt'] and not config.get("group_at_off", False)) or self.check_prefix(origin_content, config.get('group_chat_prefix')) \ 126 | or self.check_contain(origin_content, config.get('group_chat_keyword')) 127 | if ('ALL_GROUP' in config.get('group_name_white_list') or group_name in config.get('group_name_white_list') or self.check_contain(group_name, config.get('group_name_keyword_white_list'))) and match_prefix: 128 | img_match_prefix = self.check_prefix(content, conf().get('image_create_prefix')) 129 | if img_match_prefix: 130 | content = content.split(img_match_prefix, 1)[1].strip() 131 | thread_pool.submit(self._do_send_img, content, group_id) 132 | else: 133 | thread_pool.submit(self._do_send_group, content, msg) 134 | 135 | def send(self, msg, receiver): 136 | itchat.send(msg, toUserName=receiver) 137 | logger.info('[WX] sendMsg={}, receiver={}'.format(msg, receiver)) 138 | 139 | def _do_send_voice(self, query, reply_user_id): 140 | try: 141 | if not query: 142 | return 143 | context = dict() 144 | context['from_user_id'] = reply_user_id 145 | reply_text = super().build_reply_content(query, context) 146 | if reply_text: 147 | replyFile = super().build_text_to_voice(reply_text) 148 | itchat.send_file(replyFile, toUserName=reply_user_id) 149 | logger.info('[WX] sendFile={}, receiver={}'.format(replyFile, reply_user_id)) 150 | except Exception as e: 151 | logger.exception(e) 152 | 153 | def _do_send_text(self, query, reply_user_id): 154 | try: 155 | if not query: 156 | return 157 | context = dict() 158 | context['session_id'] = reply_user_id 159 | reply_text = super().build_reply_content(query, context) 160 | if reply_text: 161 | self.send(conf().get("single_chat_reply_prefix") + reply_text, reply_user_id) 162 | except Exception as e: 163 | logger.exception(e) 164 | 165 | def _do_send_img(self, query, reply_user_id): 166 | try: 167 | if not query: 168 | return 169 | context = dict() 170 | context['type'] = 
'IMAGE_CREATE' 171 | img_url = super().build_reply_content(query, context) 172 | if not img_url: 173 | return 174 | 175 | # 图片下载 176 | pic_res = requests.get(img_url, stream=True) 177 | image_storage = io.BytesIO() 178 | for block in pic_res.iter_content(1024): 179 | image_storage.write(block) 180 | image_storage.seek(0) 181 | 182 | # 图片发送 183 | itchat.send_image(image_storage, reply_user_id) 184 | logger.info('[WX] sendImage, receiver={}'.format(reply_user_id)) 185 | except Exception as e: 186 | logger.exception(e) 187 | 188 | def _do_send_group(self, query, msg): 189 | if not query: 190 | return 191 | context = dict() 192 | group_name = msg['User']['NickName'] 193 | group_id = msg['User']['UserName'] 194 | group_chat_in_one_session = conf().get('group_chat_in_one_session', []) 195 | if ('ALL_GROUP' in group_chat_in_one_session or \ 196 | group_name in group_chat_in_one_session or \ 197 | self.check_contain(group_name, group_chat_in_one_session)): 198 | context['session_id'] = group_id 199 | else: 200 | context['session_id'] = msg['ActualUserName'] 201 | reply_text = super().build_reply_content(query, context) 202 | if reply_text: 203 | reply_text = '@' + msg['ActualNickName'] + ' ' + reply_text.strip() 204 | self.send(conf().get("group_chat_reply_prefix", "") + reply_text, group_id) 205 | 206 | 207 | def check_prefix(self, content, prefix_list): 208 | for prefix in prefix_list: 209 | if content.startswith(prefix): 210 | return prefix 211 | return None 212 | 213 | 214 | def check_contain(self, content, keyword_list): 215 | if not keyword_list: 216 | return None 217 | for ky in keyword_list: 218 | if content.find(ky) != -1: 219 | return True 220 | return None 221 | 222 | -------------------------------------------------------------------------------- /channel/wechat/wechaty_channel.py: -------------------------------------------------------------------------------- 1 | # encoding:utf-8 2 | 3 | """ 4 | wechaty channel 5 | Python Wechaty - https://github.com/wechaty/python-wechaty 6 | """ 7 | import io 8 | import os 9 | import json 10 | import time 11 | import asyncio 12 | import requests 13 | from typing import Optional, Union 14 | from wechaty_puppet import MessageType, FileBox, ScanStatus # type: ignore 15 | from wechaty import Wechaty, Contact 16 | from wechaty.user import Message, Room, MiniProgram, UrlLink 17 | from channel.channel import Channel 18 | from common.log import logger 19 | from config import conf 20 | 21 | 22 | class WechatyChannel(Channel): 23 | 24 | def __init__(self): 25 | pass 26 | 27 | def startup(self): 28 | asyncio.run(self.main()) 29 | 30 | async def main(self): 31 | config = conf() 32 | # 使用PadLocal协议 比较稳定(免费web协议 os.environ['WECHATY_PUPPET_SERVICE_ENDPOINT'] = '127.0.0.1:8080') 33 | token = config.get('wechaty_puppet_service_token') 34 | os.environ['WECHATY_PUPPET_SERVICE_TOKEN'] = token 35 | global bot 36 | bot = Wechaty() 37 | 38 | bot.on('scan', self.on_scan) 39 | bot.on('login', self.on_login) 40 | bot.on('message', self.on_message) 41 | await bot.start() 42 | 43 | async def on_login(self, contact: Contact): 44 | logger.info('[WX] login user={}'.format(contact)) 45 | 46 | async def on_scan(self, status: ScanStatus, qr_code: Optional[str] = None, 47 | data: Optional[str] = None): 48 | contact = self.Contact.load(self.contact_id) 49 | logger.info('[WX] scan user={}, scan status={}, scan qr_code={}'.format(contact, status.name, qr_code)) 50 | # print(f'user <{contact}> scan status: {status.name} , 'f'qr_code: {qr_code}') 51 | 52 | async def on_message(self, msg: 
Message): 53 | """ 54 | listen for message event 55 | """ 56 | from_contact = msg.talker() # 获取消息的发送者 57 | to_contact = msg.to() # 接收人 58 | room = msg.room() # 获取消息来自的群聊. 如果消息不是来自群聊, 则返回None 59 | from_user_id = from_contact.contact_id 60 | to_user_id = to_contact.contact_id # 接收人id 61 | # other_user_id = msg['User']['UserName'] # 对手方id 62 | content = msg.text() 63 | mention_content = await msg.mention_text() # 返回过滤掉@name后的消息 64 | match_prefix = self.check_prefix(content, conf().get('single_chat_prefix')) 65 | conversation: Union[Room, Contact] = from_contact if room is None else room 66 | 67 | if room is None and msg.type() == MessageType.MESSAGE_TYPE_TEXT: 68 | if not msg.is_self() and match_prefix is not None: 69 | # 好友向自己发送消息 70 | if match_prefix != '': 71 | str_list = content.split(match_prefix, 1) 72 | if len(str_list) == 2: 73 | content = str_list[1].strip() 74 | 75 | img_match_prefix = self.check_prefix(content, conf().get('image_create_prefix')) 76 | if img_match_prefix: 77 | content = content.split(img_match_prefix, 1)[1].strip() 78 | await self._do_send_img(content, from_user_id) 79 | else: 80 | await self._do_send(content, from_user_id) 81 | elif msg.is_self() and match_prefix: 82 | # 自己给好友发送消息 83 | str_list = content.split(match_prefix, 1) 84 | if len(str_list) == 2: 85 | content = str_list[1].strip() 86 | img_match_prefix = self.check_prefix(content, conf().get('image_create_prefix')) 87 | if img_match_prefix: 88 | content = content.split(img_match_prefix, 1)[1].strip() 89 | await self._do_send_img(content, to_user_id) 90 | else: 91 | await self._do_send(content, to_user_id) 92 | elif room and msg.type() == MessageType.MESSAGE_TYPE_TEXT: 93 | # 群组&文本消息 94 | room_id = room.room_id 95 | room_name = await room.topic() 96 | from_user_id = from_contact.contact_id 97 | from_user_name = from_contact.name 98 | is_at = await msg.mention_self() 99 | content = mention_content 100 | config = conf() 101 | match_prefix = (is_at and not config.get("group_at_off", False)) \ 102 | or self.check_prefix(content, config.get('group_chat_prefix')) \ 103 | or self.check_contain(content, config.get('group_chat_keyword')) 104 | if ('ALL_GROUP' in config.get('group_name_white_list') or room_name in config.get( 105 | 'group_name_white_list') or self.check_contain(room_name, config.get( 106 | 'group_name_keyword_white_list'))) and match_prefix: 107 | img_match_prefix = self.check_prefix(content, conf().get('image_create_prefix')) 108 | if img_match_prefix: 109 | content = content.split(img_match_prefix, 1)[1].strip() 110 | await self._do_send_group_img(content, room_id) 111 | else: 112 | await self._do_send_group(content, room_id, room_name, from_user_id, from_user_name) 113 | 114 | async def send(self, message: Union[str, Message, FileBox, Contact, UrlLink, MiniProgram], receiver): 115 | logger.info('[WX] sendMsg={}, receiver={}'.format(message, receiver)) 116 | if receiver: 117 | contact = await bot.Contact.find(receiver) 118 | await contact.say(message) 119 | 120 | async def send_group(self, message: Union[str, Message, FileBox, Contact, UrlLink, MiniProgram], receiver): 121 | logger.info('[WX] sendMsg={}, receiver={}'.format(message, receiver)) 122 | if receiver: 123 | room = await bot.Room.find(receiver) 124 | await room.say(message) 125 | 126 | async def _do_send(self, query, reply_user_id): 127 | try: 128 | if not query: 129 | return 130 | context = dict() 131 | context['session_id'] = reply_user_id 132 | reply_text = super().build_reply_content(query, context) 133 | if reply_text: 134 | await 
self.send(conf().get("single_chat_reply_prefix") + reply_text, reply_user_id) 135 | except Exception as e: 136 | logger.exception(e) 137 | 138 | async def _do_send_img(self, query, reply_user_id): 139 | try: 140 | if not query: 141 | return 142 | context = dict() 143 | context['type'] = 'IMAGE_CREATE' 144 | img_url = super().build_reply_content(query, context) 145 | if not img_url: 146 | return 147 | # 图片下载 148 | # pic_res = requests.get(img_url, stream=True) 149 | # image_storage = io.BytesIO() 150 | # for block in pic_res.iter_content(1024): 151 | # image_storage.write(block) 152 | # image_storage.seek(0) 153 | 154 | # 图片发送 155 | logger.info('[WX] sendImage, receiver={}'.format(reply_user_id)) 156 | t = int(time.time()) 157 | file_box = FileBox.from_url(url=img_url, name=str(t) + '.png') 158 | await self.send(file_box, reply_user_id) 159 | except Exception as e: 160 | logger.exception(e) 161 | 162 | async def _do_send_group(self, query, group_id, group_name, group_user_id, group_user_name): 163 | if not query: 164 | return 165 | context = dict() 166 | group_chat_in_one_session = conf().get('group_chat_in_one_session', []) 167 | if ('ALL_GROUP' in group_chat_in_one_session or \ 168 | group_name in group_chat_in_one_session or \ 169 | self.check_contain(group_name, group_chat_in_one_session)): 170 | context['session_id'] = str(group_id) 171 | else: 172 | context['session_id'] = str(group_id) + '-' + str(group_user_id) 173 | reply_text = super().build_reply_content(query, context) 174 | if reply_text: 175 | reply_text = '@' + group_user_name + ' ' + reply_text.strip() 176 | await self.send_group(conf().get("group_chat_reply_prefix", "") + reply_text, group_id) 177 | 178 | async def _do_send_group_img(self, query, reply_room_id): 179 | try: 180 | if not query: 181 | return 182 | context = dict() 183 | context['type'] = 'IMAGE_CREATE' 184 | img_url = super().build_reply_content(query, context) 185 | if not img_url: 186 | return 187 | # 图片发送 188 | logger.info('[WX] sendImage, receiver={}'.format(reply_room_id)) 189 | t = int(time.time()) 190 | file_box = FileBox.from_url(url=img_url, name=str(t) + '.png') 191 | await self.send_group(file_box, reply_room_id) 192 | except Exception as e: 193 | logger.exception(e) 194 | 195 | def check_prefix(self, content, prefix_list): 196 | for prefix in prefix_list: 197 | if content.startswith(prefix): 198 | return prefix 199 | return None 200 | 201 | def check_contain(self, content, keyword_list): 202 | if not keyword_list: 203 | return None 204 | for ky in keyword_list: 205 | if content.find(ky) != -1: 206 | return True 207 | return None 208 | -------------------------------------------------------------------------------- /common/expired_dict.py: -------------------------------------------------------------------------------- 1 | from datetime import datetime, timedelta 2 | 3 | class ExpiredDict(dict): 4 | def __init__(self, expires_in_seconds): 5 | super().__init__() 6 | self.expires_in_seconds = expires_in_seconds 7 | 8 | def __getitem__(self, key): 9 | value, expiry_time = super().__getitem__(key) 10 | if datetime.now() > expiry_time: 11 | del self[key] 12 | raise KeyError("expired {}".format(key)) 13 | self.__setitem__(key, value) 14 | return value 15 | 16 | def __setitem__(self, key, value): 17 | expiry_time = datetime.now() + timedelta(seconds=self.expires_in_seconds) 18 | super().__setitem__(key, (value, expiry_time)) 19 | def get(self, key, default=None): 20 | try: 21 | return self[key] 22 | except KeyError: 23 | return default 
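# --- Illustrative usage sketch (not part of the original file) ---
# ExpiredDict stores (value, expiry_time) tuples and evicts lazily on access:
# reading an unexpired key refreshes its TTL, reading an expired key deletes it
# and raises KeyError, and get() turns that KeyError into the supplied default.
# Assuming `import time` and a 2-second TTL:
#
#   sessions = ExpiredDict(expires_in_seconds=2)
#   sessions['user-1'] = ['hello']
#   sessions.get('user-1')          # -> ['hello'] (expiry is refreshed on read)
#   time.sleep(3)
#   sessions.get('user-1', [])      # -> [] (entry has expired and was removed)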
-------------------------------------------------------------------------------- /common/log.py: -------------------------------------------------------------------------------- 1 | import logging 2 | import sys 3 | 4 | 5 | def _get_logger(): 6 | log = logging.getLogger('log') 7 | log.setLevel(logging.INFO) 8 | console_handle = logging.StreamHandler(sys.stdout) 9 | console_handle.setFormatter(logging.Formatter('[%(levelname)s][%(asctime)s][%(filename)s:%(lineno)d] - %(message)s', 10 | datefmt='%Y-%m-%d %H:%M:%S')) 11 | log.addHandler(console_handle) 12 | return log 13 | 14 | 15 | # 日志句柄 16 | logger = _get_logger() -------------------------------------------------------------------------------- /common/tmp_dir.py: -------------------------------------------------------------------------------- 1 | 2 | import os 3 | import pathlib 4 | from config import conf 5 | 6 | 7 | class TmpDir(object): 8 | """A temporary directory that is deleted when the object is destroyed. 9 | """ 10 | 11 | tmpFilePath = pathlib.Path('./tmp/') 12 | 13 | def __init__(self): 14 | pathExists = os.path.exists(self.tmpFilePath) 15 | if not pathExists and conf().get('speech_recognition') == True: 16 | os.makedirs(self.tmpFilePath) 17 | 18 | def path(self): 19 | return str(self.tmpFilePath) + '/' 20 | -------------------------------------------------------------------------------- /config-template.json: -------------------------------------------------------------------------------- 1 | { 2 | "open_ai_api_key": "YOUR API KEY", 3 | "proxy": "", 4 | "single_chat_prefix": ["bot", "@bot"], 5 | "single_chat_reply_prefix": "[bot] ", 6 | "group_chat_prefix": ["@bot"], 7 | "group_name_white_list": ["ChatGPT测试群", "ChatGPT测试群2"], 8 | "image_create_prefix": ["画", "看", "找"], 9 | "conversation_max_tokens": 1000, 10 | "speech_recognition": false, 11 | "character_desc": "你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。", 12 | "expires_in_seconds": 3600 13 | } 14 | -------------------------------------------------------------------------------- /config.py: -------------------------------------------------------------------------------- 1 | # encoding:utf-8 2 | 3 | import json 4 | import os 5 | from common.log import logger 6 | 7 | config = {} 8 | 9 | 10 | def load_config(): 11 | global config 12 | config_path = "config.json" 13 | if not os.path.exists(config_path): 14 | raise Exception('配置文件不存在,请根据config-template.json模板创建config.json文件') 15 | 16 | config_str = read_file(config_path) 17 | # 将json字符串反序列化为dict类型 18 | config = json.loads(config_str) 19 | logger.info("[INIT] load config: {}".format(config)) 20 | 21 | 22 | 23 | def get_root(): 24 | return os.path.dirname(os.path.abspath( __file__ )) 25 | 26 | 27 | def read_file(path): 28 | with open(path, mode='r', encoding='utf-8') as f: 29 | return f.read() 30 | 31 | 32 | def conf(): 33 | return config 34 | -------------------------------------------------------------------------------- /docker/Dockerfile.alpine: -------------------------------------------------------------------------------- 1 | FROM python:3.7.9-alpine 2 | 3 | LABEL maintainer="foo@bar.com" 4 | ARG TZ='Asia/Shanghai' 5 | 6 | ARG CHATGPT_ON_WECHAT_VER 7 | 8 | ENV BUILD_PREFIX=/app \ 9 | BUILD_OPEN_AI_API_KEY='YOUR OPEN AI KEY HERE' 10 | 11 | RUN apk add --no-cache \ 12 | bash \ 13 | curl \ 14 | wget \ 15 | && export BUILD_GITHUB_TAG=${CHATGPT_ON_WECHAT_VER:-`curl -sL "https://api.github.com/repos/zhayujie/chatgpt-on-wechat/releases/latest" | \ 16 | grep '"tag_name":' | \ 17 | sed -E 's/.*"([^"]+)".*/\1/'`} \ 18 
| && wget -t 3 -T 30 -nv -O chatgpt-on-wechat-${BUILD_GITHUB_TAG}.tar.gz \ 19 | https://github.com/zhayujie/chatgpt-on-wechat/archive/refs/tags/${BUILD_GITHUB_TAG}.tar.gz \ 20 | && tar -xzf chatgpt-on-wechat-${BUILD_GITHUB_TAG}.tar.gz \ 21 | && mv chatgpt-on-wechat-${BUILD_GITHUB_TAG} ${BUILD_PREFIX} \ 22 | && rm chatgpt-on-wechat-${BUILD_GITHUB_TAG}.tar.gz \ 23 | && cd ${BUILD_PREFIX} \ 24 | && cp config-template.json ${BUILD_PREFIX}/config.json \ 25 | && sed -i "2s/YOUR API KEY/${BUILD_OPEN_AI_API_KEY}/" ${BUILD_PREFIX}/config.json \ 26 | && /usr/local/bin/python -m pip install --no-cache --upgrade pip \ 27 | && pip install --no-cache \ 28 | itchat-uos==1.5.0.dev0 \ 29 | openai \ 30 | && apk del curl wget 31 | 32 | WORKDIR ${BUILD_PREFIX} 33 | 34 | ADD ./entrypoint.sh /entrypoint.sh 35 | 36 | RUN chmod +x /entrypoint.sh \ 37 | && adduser -D -h /home/noroot -u 1000 -s /bin/bash noroot \ 38 | && chown noroot:noroot ${BUILD_PREFIX} 39 | 40 | USER noroot 41 | 42 | ENTRYPOINT ["/entrypoint.sh"] 43 | -------------------------------------------------------------------------------- /docker/Dockerfile.debian: -------------------------------------------------------------------------------- 1 | FROM python:3.7.9 2 | 3 | LABEL maintainer="foo@bar.com" 4 | ARG TZ='Asia/Shanghai' 5 | 6 | ARG CHATGPT_ON_WECHAT_VER 7 | 8 | ENV BUILD_PREFIX=/app \ 9 | BUILD_OPEN_AI_API_KEY='YOUR OPEN AI KEY HERE' 10 | 11 | RUN apt-get update \ 12 | && apt-get install -y --no-install-recommends \ 13 | wget \ 14 | curl \ 15 | && rm -rf /var/lib/apt/lists/* \ 16 | && export BUILD_GITHUB_TAG=${CHATGPT_ON_WECHAT_VER:-`curl -sL "https://api.github.com/repos/zhayujie/chatgpt-on-wechat/releases/latest" | \ 17 | grep '"tag_name":' | \ 18 | sed -E 's/.*"([^"]+)".*/\1/'`} \ 19 | && wget -t 3 -T 30 -nv -O chatgpt-on-wechat-${BUILD_GITHUB_TAG}.tar.gz \ 20 | https://github.com/zhayujie/chatgpt-on-wechat/archive/refs/tags/${BUILD_GITHUB_TAG}.tar.gz \ 21 | && tar -xzf chatgpt-on-wechat-${BUILD_GITHUB_TAG}.tar.gz \ 22 | && mv chatgpt-on-wechat-${BUILD_GITHUB_TAG} ${BUILD_PREFIX} \ 23 | && rm chatgpt-on-wechat-${BUILD_GITHUB_TAG}.tar.gz \ 24 | && cd ${BUILD_PREFIX} \ 25 | && cp config-template.json ${BUILD_PREFIX}/config.json \ 26 | && sed -i "2s/YOUR API KEY/${BUILD_OPEN_AI_API_KEY}/" ${BUILD_PREFIX}/config.json \ 27 | && /usr/local/bin/python -m pip install --no-cache --upgrade pip \ 28 | && pip install --no-cache \ 29 | itchat-uos==1.5.0.dev0 \ 30 | openai 31 | 32 | WORKDIR ${BUILD_PREFIX} 33 | 34 | ADD ./entrypoint.sh /entrypoint.sh 35 | 36 | RUN chmod +x /entrypoint.sh \ 37 | && groupadd -r noroot \ 38 | && useradd -r -g noroot -s /bin/bash -d /home/noroot noroot \ 39 | && chown -R noroot:noroot ${BUILD_PREFIX} 40 | 41 | USER noroot 42 | 43 | ENTRYPOINT ["/entrypoint.sh"] 44 | -------------------------------------------------------------------------------- /docker/build.alpine.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # fetch latest release tag 4 | CHATGPT_ON_WECHAT_TAG=`curl -sL "https://api.github.com/repos/zhayujie/chatgpt-on-wechat/releases/latest" | \ 5 | grep '"tag_name":' | \ 6 | sed -E 's/.*"([^"]+)".*/\1/'` 7 | 8 | # build image 9 | docker build -f Dockerfile.alpine \ 10 | --build-arg CHATGPT_ON_WECHAT_VER=$CHATGPT_ON_WECHAT_TAG \ 11 | -t zhayujie/chatgpt-on-wechat . 
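# (Illustrative note, not part of the original script) Once built, the image can
# be started via docker-compose (see docker/docker-compose.yaml) or run directly,
# passing the variables that entrypoint.sh reads, for example:
#   docker run -it --rm -e OPEN_AI_API_KEY='YOUR API KEY' zhayujie/chatgpt-on-wechat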
12 | 13 | # tag image 14 | docker tag zhayujie/chatgpt-on-wechat zhayujie/chatgpt-on-wechat:$CHATGPT_ON_WECHAT_TAG-alpine 15 | -------------------------------------------------------------------------------- /docker/build.debian.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # fetch latest release tag 4 | CHATGPT_ON_WECHAT_TAG=`curl -sL "https://api.github.com/repos/zhayujie/chatgpt-on-wechat/releases/latest" | \ 5 | grep '"tag_name":' | \ 6 | sed -E 's/.*"([^"]+)".*/\1/'` 7 | 8 | # build image 9 | docker build -f Dockerfile.debian \ 10 | --build-arg CHATGPT_ON_WECHAT_VER=$CHATGPT_ON_WECHAT_TAG \ 11 | -t zhayujie/chatgpt-on-wechat . 12 | 13 | # tag image 14 | docker tag zhayujie/chatgpt-on-wechat zhayujie/chatgpt-on-wechat:$CHATGPT_ON_WECHAT_TAG-debian -------------------------------------------------------------------------------- /docker/docker-compose.yaml: -------------------------------------------------------------------------------- 1 | version: '2.0' 2 | services: 3 | chatgpt-on-wechat: 4 | build: 5 | context: ./ 6 | dockerfile: Dockerfile.alpine 7 | image: zhayujie/chatgpt-on-wechat 8 | container_name: sample-chatgpt-on-wechat 9 | environment: 10 | OPEN_AI_API_KEY: 'YOUR API KEY' 11 | OPEN_AI_PROXY: '' 12 | SINGLE_CHAT_PREFIX: '["bot", "@bot"]' 13 | SINGLE_CHAT_REPLY_PREFIX: '"[bot] "' 14 | GROUP_CHAT_PREFIX: '["@bot"]' 15 | GROUP_NAME_WHITE_LIST: '["ChatGPT测试群", "ChatGPT测试群2"]' 16 | IMAGE_CREATE_PREFIX: '["画", "看", "找"]' 17 | CONVERSATION_MAX_TOKENS: 1000 18 | CHARACTER_DESC: '你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。' 19 | EXPIRES_IN_SECONDS: 3600 -------------------------------------------------------------------------------- /docker/entrypoint.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | set -e 3 | 4 | # build prefix 5 | CHATGPT_ON_WECHAT_PREFIX=${CHATGPT_ON_WECHAT_PREFIX:-""} 6 | # path to config.json 7 | CHATGPT_ON_WECHAT_CONFIG_PATH=${CHATGPT_ON_WECHAT_CONFIG_PATH:-""} 8 | # execution command line 9 | CHATGPT_ON_WECHAT_EXEC=${CHATGPT_ON_WECHAT_EXEC:-""} 10 | 11 | OPEN_AI_API_KEY=${OPEN_AI_API_KEY:-""} 12 | OPEN_AI_PROXY=${OPEN_AI_PROXY:-""} 13 | SINGLE_CHAT_PREFIX=${SINGLE_CHAT_PREFIX:-""} 14 | SINGLE_CHAT_REPLY_PREFIX=${SINGLE_CHAT_REPLY_PREFIX:-""} 15 | GROUP_CHAT_PREFIX=${GROUP_CHAT_PREFIX:-""} 16 | GROUP_NAME_WHITE_LIST=${GROUP_NAME_WHITE_LIST:-""} 17 | IMAGE_CREATE_PREFIX=${IMAGE_CREATE_PREFIX:-""} 18 | CONVERSATION_MAX_TOKENS=${CONVERSATION_MAX_TOKENS:-""} 19 | CHARACTER_DESC=${CHARACTER_DESC:-""} 20 | EXPIRES_IN_SECONDS=${EXPIRES_IN_SECONDS:-""} 21 | 22 | # CHATGPT_ON_WECHAT_PREFIX is empty, use /app 23 | if [ "$CHATGPT_ON_WECHAT_PREFIX" == "" ] ; then 24 | CHATGPT_ON_WECHAT_PREFIX=/app 25 | fi 26 | 27 | # CHATGPT_ON_WECHAT_CONFIG_PATH is empty, use '/app/config.json' 28 | if [ "$CHATGPT_ON_WECHAT_CONFIG_PATH" == "" ] ; then 29 | CHATGPT_ON_WECHAT_CONFIG_PATH=$CHATGPT_ON_WECHAT_PREFIX/config.json 30 | fi 31 | 32 | # CHATGPT_ON_WECHAT_EXEC is empty, use ‘python app.py’ 33 | if [ "$CHATGPT_ON_WECHAT_EXEC" == "" ] ; then 34 | CHATGPT_ON_WECHAT_EXEC="python app.py" 35 | fi 36 | 37 | # modify content in config.json 38 | if [ "$OPEN_AI_API_KEY" != "" ] ; then 39 | sed -i "2c \"open_ai_api_key\": \"$OPEN_AI_API_KEY\"," $CHATGPT_ON_WECHAT_CONFIG_PATH 40 | else 41 | echo -e "\033[31m[Warning] You need to set OPEN_AI_API_KEY before running!\033[0m" 42 | fi 43 | 44 | # use http_proxy as default 45 | if [ "$HTTP_PROXY" != "" ] ; then 46 | 
sed -i "3c \"proxy\": \"$HTTP_PROXY\"," $CHATGPT_ON_WECHAT_CONFIG_PATH 47 | fi 48 | 49 | if [ "$OPEN_AI_PROXY" != "" ] ; then 50 | sed -i "3c \"proxy\": \"$OPEN_AI_PROXY\"," $CHATGPT_ON_WECHAT_CONFIG_PATH 51 | fi 52 | 53 | if [ "$SINGLE_CHAT_PREFIX" != "" ] ; then 54 | sed -i "4c \"single_chat_prefix\": $SINGLE_CHAT_PREFIX," $CHATGPT_ON_WECHAT_CONFIG_PATH 55 | fi 56 | 57 | if [ "$SINGLE_CHAT_REPLY_PREFIX" != "" ] ; then 58 | sed -i "5c \"single_chat_reply_prefix\": $SINGLE_CHAT_REPLY_PREFIX," $CHATGPT_ON_WECHAT_CONFIG_PATH 59 | fi 60 | 61 | if [ "$GROUP_CHAT_PREFIX" != "" ] ; then 62 | sed -i "6c \"group_chat_prefix\": $GROUP_CHAT_PREFIX," $CHATGPT_ON_WECHAT_CONFIG_PATH 63 | fi 64 | 65 | if [ "$GROUP_NAME_WHITE_LIST" != "" ] ; then 66 | sed -i "7c \"group_name_white_list\": $GROUP_NAME_WHITE_LIST," $CHATGPT_ON_WECHAT_CONFIG_PATH 67 | fi 68 | 69 | if [ "$IMAGE_CREATE_PREFIX" != "" ] ; then 70 | sed -i "8c \"image_create_prefix\": $IMAGE_CREATE_PREFIX," $CHATGPT_ON_WECHAT_CONFIG_PATH 71 | fi 72 | 73 | if [ "$CONVERSATION_MAX_TOKENS" != "" ] ; then 74 | sed -i "9c \"conversation_max_tokens\": $CONVERSATION_MAX_TOKENS," $CHATGPT_ON_WECHAT_CONFIG_PATH 75 | fi 76 | 77 | if [ "$CHARACTER_DESC" != "" ] ; then 78 | sed -i "10c \"character_desc\": \"$CHARACTER_DESC\"," $CHATGPT_ON_WECHAT_CONFIG_PATH 79 | fi 80 | 81 | if [ "$EXPIRES_IN_SECONDS" != "" ] ; then 82 | sed -i "11c \"expires_in_seconds\": $EXPIRES_IN_SECONDS" $CHATGPT_ON_WECHAT_CONFIG_PATH 83 | fi 84 | 85 | # go to prefix dir 86 | cd $CHATGPT_ON_WECHAT_PREFIX 87 | # excute 88 | $CHATGPT_ON_WECHAT_EXEC 89 | 90 | 91 | -------------------------------------------------------------------------------- /docker/sample-chatgpt-on-wechat/.env: -------------------------------------------------------------------------------- 1 | OPEN_AI_API_KEY=YOUR API KEY 2 | OPEN_AI_PROXY= 3 | SINGLE_CHAT_PREFIX=["bot", "@bot"] 4 | SINGLE_CHAT_REPLY_PREFIX="[bot] " 5 | GROUP_CHAT_PREFIX=["@bot"] 6 | GROUP_NAME_WHITE_LIST=["ChatGPT测试群", "ChatGPT测试群2"] 7 | IMAGE_CREATE_PREFIX=["画", "看", "找"] 8 | CONVERSATION_MAX_TOKENS=1000 9 | CHARACTER_DESC=你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。 10 | EXPIRES_IN_SECONDS=3600 11 | 12 | # Optional 13 | #CHATGPT_ON_WECHAT_PREFIX=/app 14 | #CHATGPT_ON_WECHAT_CONFIG_PATH=/app/config.json 15 | #CHATGPT_ON_WECHAT_EXEC=python app.py -------------------------------------------------------------------------------- /docker/sample-chatgpt-on-wechat/Makefile: -------------------------------------------------------------------------------- 1 | IMG:=`cat Name` 2 | MOUNT:= 3 | PORT_MAP:= 4 | DOTENV:=.env 5 | CONTAINER_NAME:=sample-chatgpt-on-wechat 6 | 7 | echo: 8 | echo $(IMG) 9 | 10 | run_d: 11 | docker rm $(CONTAINER_NAME) || echo 12 | docker run -dt --name $(CONTAINER_NAME) $(PORT_MAP) \ 13 | --env-file=$(DOTENV) \ 14 | $(MOUNT) $(IMG) 15 | 16 | run_i: 17 | docker rm $(CONTAINER_NAME) || echo 18 | docker run -it --name $(CONTAINER_NAME) $(PORT_MAP) \ 19 | --env-file=$(DOTENV) \ 20 | $(MOUNT) $(IMG) 21 | 22 | stop: 23 | docker stop $(CONTAINER_NAME) 24 | 25 | rm: stop 26 | docker rm $(CONTAINER_NAME) 27 | -------------------------------------------------------------------------------- /docker/sample-chatgpt-on-wechat/Name: -------------------------------------------------------------------------------- 1 | zhayujie/chatgpt-on-wechat 2 | -------------------------------------------------------------------------------- /docs/images/group-chat-sample.jpg: 
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/caesee/chagpt-on-wechat/8fa4041fc2a50013618ac33000a349dcfdebf989/docs/images/group-chat-sample.jpg
--------------------------------------------------------------------------------
/docs/images/image-create-sample.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/caesee/chagpt-on-wechat/8fa4041fc2a50013618ac33000a349dcfdebf989/docs/images/image-create-sample.jpg
--------------------------------------------------------------------------------
/docs/images/single-chat-sample.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/caesee/chagpt-on-wechat/8fa4041fc2a50013618ac33000a349dcfdebf989/docs/images/single-chat-sample.jpg
--------------------------------------------------------------------------------
/requirement.txt:
--------------------------------------------------------------------------------
1 | itchat-uos==1.5.0.dev0
2 | openai
3 | wechaty
--------------------------------------------------------------------------------
/scripts/shutdown.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | 
3 | # shut down the service
4 | cd `dirname $0`/..
5 | export BASE_DIR=`pwd`
6 | pid=`ps ax | grep -i app.py | grep "${BASE_DIR}" | grep python3 | grep -v grep | awk '{print $1}'`
7 | if [ -z "$pid" ] ; then
8 |     echo "No chatgpt-on-wechat running."
9 |     exit -1;
10 | fi
11 | 
12 | echo "The chatgpt-on-wechat(${pid}) is running..."
13 | 
14 | kill ${pid}
15 | 
16 | echo "Send shutdown request to chatgpt-on-wechat(${pid}) OK"
17 | 
--------------------------------------------------------------------------------
/scripts/start.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # run chatgpt-on-wechat in the background
3 | 
4 | cd `dirname $0`/..
5 | export BASE_DIR=`pwd`
6 | echo $BASE_DIR
7 | 
8 | # check the nohup.out log output file
9 | if [ ! -f "${BASE_DIR}/nohup.out" ]; then
10 |     touch "${BASE_DIR}/nohup.out"
11 |     echo "create file ${BASE_DIR}/nohup.out"
12 | fi
13 | 
14 | nohup python3 "${BASE_DIR}/app.py" & tail -f "${BASE_DIR}/nohup.out"
15 | 
16 | echo "chatgpt-on-wechat is starting, you can check the ${BASE_DIR}/nohup.out"
17 | 
--------------------------------------------------------------------------------
/scripts/tout.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # tail the log output
3 | 
4 | cd `dirname $0`/..
5 | export BASE_DIR=`pwd`
6 | echo $BASE_DIR
7 | 
8 | # check the nohup.out log output file
-f "${BASE_DIR}/nohup.out" ]; then 10 | echo "No file ${BASE_DIR}/nohup.out" 11 | exit -1; 12 | fi 13 | 14 | tail -f "${BASE_DIR}/nohup.out" 15 | -------------------------------------------------------------------------------- /voice/baidu/baidu_voice.py: -------------------------------------------------------------------------------- 1 | 2 | """ 3 | baidu voice service 4 | """ 5 | import time 6 | from aip import AipSpeech 7 | from common.log import logger 8 | from common.tmp_dir import TmpDir 9 | from voice.voice import Voice 10 | from config import conf 11 | 12 | class BaiduVoice(Voice): 13 | APP_ID = conf().get('baidu_app_id') 14 | API_KEY = conf().get('baidu_api_key') 15 | SECRET_KEY = conf().get('baidu_secret_key') 16 | client = AipSpeech(APP_ID, API_KEY, SECRET_KEY) 17 | 18 | def __init__(self): 19 | pass 20 | 21 | def voiceToText(self, voice_file): 22 | pass 23 | 24 | def textToVoice(self, text): 25 | result = self.client.synthesis(text, 'zh', 1, { 26 | 'spd': 5, 'pit': 5, 'vol': 5, 'per': 111 27 | }) 28 | if not isinstance(result, dict): 29 | fileName = TmpDir().path() + '语音回复_' + str(int(time.time())) + '.mp3' 30 | with open(fileName, 'wb') as f: 31 | f.write(result) 32 | logger.info('[Baidu] textToVoice text={} voice file name={}'.format(text, fileName)) 33 | return fileName 34 | else: 35 | logger.error('[Baidu] textToVoice error={}'.format(result)) 36 | return None 37 | -------------------------------------------------------------------------------- /voice/google/google_voice.py: -------------------------------------------------------------------------------- 1 | 2 | """ 3 | google voice service 4 | """ 5 | 6 | import pathlib 7 | import subprocess 8 | import time 9 | import speech_recognition 10 | import pyttsx3 11 | from common.log import logger 12 | from common.tmp_dir import TmpDir 13 | from voice.voice import Voice 14 | 15 | 16 | class GoogleVoice(Voice): 17 | recognizer = speech_recognition.Recognizer() 18 | engine = pyttsx3.init() 19 | 20 | def __init__(self): 21 | # 语速 22 | self.engine.setProperty('rate', 125) 23 | # 音量 24 | self.engine.setProperty('volume', 1.0) 25 | # 0为男声,1为女声 26 | voices = self.engine.getProperty('voices') 27 | self.engine.setProperty('voice', voices[1].id) 28 | 29 | def voiceToText(self, voice_file): 30 | new_file = voice_file.replace('.mp3', '.wav') 31 | subprocess.call('ffmpeg -i ' + voice_file + 32 | ' -acodec pcm_s16le -ac 1 -ar 16000 ' + new_file, shell=True) 33 | with speech_recognition.AudioFile(new_file) as source: 34 | audio = self.recognizer.record(source) 35 | try: 36 | text = self.recognizer.recognize_google(audio, language='zh-CN') 37 | logger.info( 38 | '[Google] voiceToText text={} voice file name={}'.format(text, voice_file)) 39 | return text 40 | except speech_recognition.UnknownValueError: 41 | return "抱歉,我听不懂。" 42 | except speech_recognition.RequestError as e: 43 | return "抱歉,无法连接到 Google 语音识别服务;{0}".format(e) 44 | 45 | def textToVoice(self, text): 46 | textFile = TmpDir().path() + '语音回复_' + str(int(time.time())) + '.mp3' 47 | self.engine.save_to_file(text, textFile) 48 | self.engine.runAndWait() 49 | logger.info( 50 | '[Google] textToVoice text={} voice file name={}'.format(text, textFile)) 51 | return textFile 52 | -------------------------------------------------------------------------------- /voice/openai/openai_voice.py: -------------------------------------------------------------------------------- 1 | 2 | """ 3 | google voice service 4 | """ 5 | import json 6 | import openai 7 | from config import conf 8 | from common.log 
8 | from common.log import logger
9 | from voice.voice import Voice
10 | 
11 | 
12 | class OpenaiVoice(Voice):
13 |     def __init__(self):
14 |         openai.api_key = conf().get('open_ai_api_key')
15 | 
16 |     def voiceToText(self, voice_file):
17 |         logger.debug(
18 |             '[Openai] voice file name={}'.format(voice_file))
19 |         with open(voice_file, "rb") as file:
20 |             reply = openai.Audio.transcribe("whisper-1", file)
21 |         text = reply["text"]
22 |         logger.info(
23 |             '[Openai] voiceToText text={} voice file name={}'.format(text, voice_file))
24 |         return text
25 | 
26 |     def textToVoice(self, text):
27 |         pass  # OpenAI text-to-speech is not implemented yet
28 | 
--------------------------------------------------------------------------------
/voice/voice.py:
--------------------------------------------------------------------------------
1 | """
2 | Voice service abstract class
3 | """
4 | 
5 | class Voice(object):
6 |     def voiceToText(self, voice_file):
7 |         """
8 |         Send voice to voice service and get text
9 |         """
10 |         raise NotImplementedError
11 | 
12 |     def textToVoice(self, text):
13 |         """
14 |         Send text to voice service and get voice
15 |         """
16 |         raise NotImplementedError
--------------------------------------------------------------------------------
/voice/voice_factory.py:
--------------------------------------------------------------------------------
1 | """
2 | voice factory
3 | """
4 | 
5 | def create_voice(voice_type):
6 |     """
7 |     create a voice instance
8 |     :param voice_type: voice type code
9 |     :return: voice instance
10 |     """
11 |     if voice_type == 'baidu':
12 |         from voice.baidu.baidu_voice import BaiduVoice
13 |         return BaiduVoice()
14 |     elif voice_type == 'google':
15 |         from voice.google.google_voice import GoogleVoice
16 |         return GoogleVoice()
17 |     elif voice_type == 'openai':
18 |         from voice.openai.openai_voice import OpenaiVoice
19 |         return OpenaiVoice()
20 |     raise RuntimeError('unknown voice type: {}'.format(voice_type))
21 | 
--------------------------------------------------------------------------------
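
For orientation, the sketch below shows one way the `Voice` interface and `voice_factory.create_voice()` listed above can be wired together. It is illustrative only: the helper `transcribe_and_speak` and the sample file path are invented for this example, the real call sites live in the channel layer (not part of this listing), and it assumes the script runs from the project root with the configuration loaded the same way `app.py` loads it.

```python
# Illustrative sketch, not a file from the repository.
from voice.voice_factory import create_voice


def transcribe_and_speak(voice_file, stt_type='openai', tts_type='baidu'):
    """Turn an incoming voice message into text, then synthesize a spoken reply."""
    stt = create_voice(stt_type)          # e.g. OpenaiVoice, which wraps whisper-1 transcription
    text = stt.voiceToText(voice_file)    # speech -> text

    tts = create_voice(tts_type)          # e.g. BaiduVoice, which implements textToVoice()
    reply_file = tts.textToVoice(text)    # text -> mp3 file path, or None on failure
    return text, reply_file


if __name__ == '__main__':
    # 'tmp/voice_message.mp3' is a made-up path used only for demonstration.
    print(transcribe_and_speak('tmp/voice_message.mp3'))
```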