Merge branch 'master' into multi-threads-control

Rock Chin 2023-03-23 21:43:41 +08:00 committed by GitHub
commit 2b8bd45bcd
39 changed files with 1361 additions and 436 deletions


@ -0,0 +1,34 @@
// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/python
{
"name": "QChatGPT 3.10",
// Or use a Dockerfile or Docker Compose file. More info: https://containers.dev/guide/dockerfile
"image": "mcr.microsoft.com/devcontainers/python:0-3.10",
// Features to add to the dev container. More info: https://containers.dev/features.
// "features": {},
// Use 'forwardPorts' to make a list of ports inside the container available locally.
// "forwardPorts": [],
// Use 'postCreateCommand' to run commands after the container is created.
// "postCreateCommand": "pip3 install --user -r requirements.txt",
// Configure tool-specific properties.
// "customizations": {},
"customizations": {
"codespaces": {
"repositories": {
"RockChinQ/QChatGPT": {
"permissions": "write-all"
},
"RockChinQ/revLibs": {
"permissions": "write-all"
}
}
}
}
// Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
// "remoteUser": "root"
}


@ -1,6 +1,6 @@
---
name: 漏洞反馈
about: 报错或漏洞请使用这个模板创建
about: 报错或漏洞请使用这个模板创建不使用此模板创建的异常、漏洞相关issue将被直接关闭
title: "[BUG]"
labels: 'bug'
assignees: ''

.github/pull_request_template.md (new file, 25 lines)

@ -0,0 +1,25 @@
## 概述
实现/解决/优化的内容:
### 事务
- [ ] 已阅读仓库[贡献指引](../CONTRIBUTING.md)
- [ ] 已与维护者在issues或其他平台沟通此PR大致内容
## 以下内容可在起草PR后、合并PR前逐步完成
### 功能
- [ ] 已编写完善的配置文件字段说明(若有新增)
- [ ] 已编写面向用户的新功能说明(若有必要)
- [ ] 已测试新功能或更改
### 兼容性
- [ ] 已处理版本兼容性
- [ ] 已处理插件兼容问题
### 风险
可能导致或已知的问题:

.gitignore (7 changed lines)

@ -11,4 +11,9 @@ prompts/
logs/
sensitive.json
temp/
current_tag
current_tag
scenario/
!scenario/default-template.json
override.json
cookies.json
res/announcement_saved

.gitmodules (new file, 3 lines)

@ -0,0 +1,3 @@
[submodule "QChatGPT.wiki"]
path = QChatGPT.wiki
url = https://github.com/RockChinQ/QChatGPT.wiki.git

QChatGPT.wiki (new submodule)

@ -0,0 +1 @@
Subproject commit 68c4ef5d240877a871044e0b340db183453799bf


@ -1,12 +1,13 @@
# QChatGPT🤖
> 2023/3/3 官方接口疑似被墙,可考虑使用网络代理 [#198](https://github.com/RockChinQ/QChatGPT/issues/198)
> 2023/3/18 现已支持GPT-4 API内测请查看`config-template.py`中的`completion_api_params`
> 2023/3/15 逆向库已支持New Bing使用方法查看[插件文档](https://github.com/RockChinQ/revLibs)
> 2023/3/15 逆向库已支持GPT-4模型使用方法查看[插件](https://github.com/RockChinQ/revLibs)
> 2023/3/3 现已在主线支持官方ChatGPT接口使用方法查看[#195](https://github.com/RockChinQ/QChatGPT/issues/195)
> 2023/3/2 OpenAI已发布ChatGPT官方接口我们正在全力接入预计明日前完成请查看[此PR](https://github.com/RockChinQ/QChatGPT/pull/194)
> 2023/2/16 现已支持接入ChatGPT网页版详情请完成部署并查看底部**插件**小节或[此仓库](https://github.com/RockChinQ/revLibs)
- 到[项目Wiki](https://github.com/RockChinQ/QChatGPT/wiki)可了解项目详细信息
- 由bilibili TheLazy制作的[视频教程](https://www.bilibili.com/video/BV15v4y1X7aP)
- 交流、答疑群: ~~204785790~~已满、691226829、656285629
- ~~由bilibili TheLazy制作的[视频教程](https://www.bilibili.com/video/BV15v4y1X7aP)~~(寄了,求大佬做个新的)
- 交流、答疑群: ~~204785790~~(已满)、~~691226829~~(已满)、656285629
- **进群提问前请您`确保`已经找遍文档和issue均无法解决**
- QQ频道机器人见[QQChannelChatGPT](https://github.com/Soulter/QQChannelChatGPT)
@ -14,11 +15,17 @@
## 🍺模型适配一览
<details>
<summary>点击此处展开</summary>
### 文字对话
- OpenAI GPT-3.5模型(ChatGPT API), 本项目原生支持, 默认使用
- OpenAI GPT-3模型, 本项目原生支持, 部署完成后前往config.py切换
- ChatGPT网页版逆向API, 由[插件](https://github.com/RockChinQ/revLibs)接入
- OpenAI GPT-3模型, 本项目原生支持, 部署完成后前往`config.py`切换
- OpenAI GPT-4模型, 本项目原生支持, 目前需要您的账户通过OpenAI的内测申请, 请前往`config.py`切换
- ChatGPT网页版GPT-3.5模型, 由[插件](https://github.com/RockChinQ/revLibs)接入
- ChatGPT网页版GPT-4模型, 目前需要ChatGPT Plus订阅, 由[插件](https://github.com/RockChinQ/revLibs)接入
- New Bing逆向库, 由[插件](https://github.com/RockChinQ/revLibs)接入
### 故事续写
@ -32,6 +39,10 @@
### 语音生成
- TTS+VITS, 由[插件](https://github.com/dominoar/QChatPlugins)接入
- Plachta/VITS-Umamusume-voice-synthesizer, 由[插件](https://github.com/oliverkirk-sudo/chat_voice)接入
</details>
## ✅功能
@ -106,18 +117,26 @@
- “丢弃”策略:此分钟内对话次数达到限制时,丢弃之后的对话
- 详细请查看config.py中的相关配置
</details>
<details>
<summary>✅支持使用网络代理</summary>
- 目前已支持正向代理访问接口
- 详细请查看config.py中的`openai_config`的说明
</details>
详情请查看[Wiki功能使用页](https://github.com/RockChinQ/QChatGPT/wiki/%E5%8A%9F%E8%83%BD%E4%BD%BF%E7%94%A8#%E5%8A%9F%E8%83%BD%E7%82%B9%E5%88%97%E4%B8%BE)
## 🔩部署
**部署过程中遇到任何问题,请先在[QChatGPT](https://github.com/RockChinQ/QChatGPT/issues)或[qcg-installer](https://github.com/RockChinQ/qcg-installer/issues)的issue里进行搜索**
**部署过程中遇到任何问题,请先在[QChatGPT](https://github.com/RockChinQ/QChatGPT/issues)或[qcg-installer](https://github.com/RockChinQ/qcg-installer/issues)的issue里进行搜索**
### - 注册OpenAI账号
> 若您要直接使用非OpenAI的模型如New Bing可跳过此步骤直接进行之后的部署完成后按照相关插件的文档进行配置即可
参考以下文章自行注册
> [国内注册ChatGPT的方法(100%可用)](https://www.pythonthree.com/register-openai-chatgpt/)
> [国内注册ChatGPT的方法(100%可用)](https://www.pythonthree.com/register-openai-chatgpt/)
> [手把手教你如何注册ChatGPT超级详细](https://guxiaobei.com/51461)
注册成功后请前往[个人中心查看](https://beta.openai.com/account/api-keys)api_key
@ -162,8 +181,7 @@ cd QChatGPT
2. 安装依赖
```bash
pip3 install yiri-mirai openai colorlog func_timeout
pip3 install dulwich
pip3 install yiri-mirai openai colorlog func_timeout dulwich Pillow
```
3. 运行一次主程序,生成配置文件
@ -194,7 +212,8 @@ python3 main.py
## 🚀使用
查看[Wiki功能使用页](https://github.com/RockChinQ/QChatGPT/wiki/%E5%8A%9F%E8%83%BD%E4%BD%BF%E7%94%A8#%E4%BD%BF%E7%94%A8%E6%96%B9%E5%BC%8F)
**部署完成后必看: [指令说明](https://github.com/RockChinQ/QChatGPT/wiki/%E5%8A%9F%E8%83%BD%E4%BD%BF%E7%94%A8#%E6%9C%BA%E5%99%A8%E4%BA%BA%E6%8C%87%E4%BB%A4)**
所有功能查看[Wiki功能使用页](https://github.com/RockChinQ/QChatGPT/wiki/%E5%8A%9F%E8%83%BD%E4%BD%BF%E7%94%A8#%E4%BD%BF%E7%94%A8%E6%96%B9%E5%BC%8F)
## 🧩插件生态
@ -202,6 +221,9 @@ python3 main.py
详见[Wiki插件使用页](https://github.com/RockChinQ/QChatGPT/wiki/%E6%8F%92%E4%BB%B6%E4%BD%BF%E7%94%A8)
开发教程见[Wiki插件开发页](https://github.com/RockChinQ/QChatGPT/wiki/%E6%8F%92%E4%BB%B6%E5%BC%80%E5%8F%91)
<details>
<summary>查看插件列表</summary>
### 示例插件
在`tests/plugin_examples`目录下,将其整个目录复制到`plugins`目录下即可使用
@ -216,20 +238,23 @@ python3 main.py
- [revLibs](https://github.com/RockChinQ/revLibs) - 将ChatGPT网页版接入此项目关于[官方接口和网页版有什么区别](https://github.com/RockChinQ/QChatGPT/wiki/%E5%AE%98%E6%96%B9%E6%8E%A5%E5%8F%A3%E4%B8%8EChatGPT%E7%BD%91%E9%A1%B5%E7%89%88)
- [hello_plugin](https://github.com/RockChinQ/hello_plugin) - `hello_plugin` 的储存库形式,插件开发模板
- [dominoar/QChatPlugins](https://github.com/dominoar/QchatPlugins) - dominoar编写的诸多新功能插件输出、Ranimg、屏蔽词规则等
- [dominoar/QChatPlugins](https://github.com/dominoar/QchatPlugins) - dominoar编写的诸多新功能插件输出、Ranimg、屏蔽词规则等
- [dominoar/QCP-NovelAi](https://github.com/dominoar/QCP-NovelAi) - NovelAI 故事叙述与绘画
- [oliverkirk-sudo/chat_voice](https://github.com/oliverkirk-sudo/chat_voice) - 文字转语音输出使用HuggingFace上的[VITS-Umamusume-voice-synthesizer模型](https://huggingface.co/spaces/Plachta/VITS-Umamusume-voice-synthesizer)
- [RockChinQ/WaitYiYan](https://github.com/RockChinQ/WaitYiYan) - 实时获取百度`文心一言`等待列表人数
- [QChartGPT_Emoticon_Plugin](https://github.com/chordfish-k/QChartGPT_Emoticon_Plugin) - 使机器人根据回复内容发送表情包
</details>
## 😘致谢
- [@the-lazy-me](https://github.com/the-lazy-me) 为本项目制作[视频教程](https://www.bilibili.com/video/BV15v4y1X7aP)
- [@mikumifa](https://github.com/mikumifa) 本项目Docker部署仓库开发者
- [@dominoar](https://github.com/dominoar) 为本项目开发多种插件
- [@hissincn](https://github.com/hissincn) 本项目贡献者
- [@LINSTCL](https://github.com/LINSTCL) GPT-3.5官方模型适配贡献者
- [@Haibersut](https://github.com/Haibersut) 本项目贡献者
- [@万神的星空](https://github.com/qq255204159) 整合包发行
- [@ljcduo](https://github.com/ljcduo) GPT-4 API内测账号提供
以及其他所有为本项目提供支持的朋友们。
以及所有[贡献者](https://github.com/RockChinQ/QChatGPT/graphs/contributors)和其他为本项目提供支持的朋友们。
## 👍赞赏
<!-- ## 👍赞赏
<img alt="赞赏码" src="res/mm_reward_qrcode_1672840549070.png" width="400" height="400"/>
<img alt="赞赏码" src="res/mm_reward_qrcode_1672840549070.png" width="400" height="400"/> -->


@ -79,6 +79,35 @@ default_prompt = {
"default": "如果我之后想获取帮助,请你说“输入!help获取帮助”",
}
# 情景预设格式
# 参考值旧版本方式default | 完整情景full_scenario
# 旧版本的格式为上述default_prompt中的内容或prompts目录下的文件名
#
# 完整情景预设的格式为JSON在scenario目录下的JSON文件中列出对话的每个回合编写方法见scenario/default-template.json
# 编写方法例如:
# {
# "prompt": [
# {
# "role": "user",
# "content": "之后当我需要帮助时,请说“输入!help获取帮助”"
# },{
# "role": "assistant",
# "content": "好的,当你之后需要帮助时,我会说“输入!help获取帮助”"
# },{
# "role": "user",
# "content": "帮助"
# },{
# "role": "assistant",
# "content": "输入!help获取帮助"
# }
# ]
# }
#
# 您可以按照上述格式编写自己的情景预设在prompt中列出对话的每个回合
# role为user或assistant分别表示用户和机器人的回复
# 每个JSON文件是一个情景预设文件名即为情景预设的名称
preset_mode = "default"
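
下面是一个最小示例(仅作说明,非本仓库内容):按上方注释描述的 `full_scenario` 格式在 `scenario/` 目录下生成一个预设文件。文件名 `tech-support.json` 与对话内容均为假设;根据本次提交中 `pkg/openai/dprompt.py` 的前缀匹配逻辑,用户可通过 `!reset tech-support` 之类的指令按文件名选用它。

```python
# Illustrative sketch only: write a full_scenario preset file in the format
# documented above. The file name (prefix-matched, ".json" included in the key)
# is what selects this preset when preset_mode = "full_scenario".
import json
import os

scenario = {
    "prompt": [
        {"role": "user", "content": "You are a concise technical support assistant."},
        {"role": "assistant", "content": "Understood. I will keep my answers short."}
    ]
}

os.makedirs("scenario", exist_ok=True)
with open(os.path.join("scenario", "tech-support.json"), "w", encoding="utf-8") as f:
    json.dump(scenario, f, ensure_ascii=False, indent=4)
```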
# 群内响应规则
# 符合此消息的群内消息即使不包含at机器人也会响应
# 支持消息前缀匹配及正则表达式匹配
@ -133,12 +162,16 @@ encourage_sponsor_at_start = True
# 每次向OpenAI接口发送对话记录上下文的字符数
# 最大不超过(4096 - max_tokens)个字符max_tokens为下方completion_api_params中的max_tokens
# 注意较大的prompt_submit_length会导致OpenAI账户额度消耗更快
prompt_submit_length = 1024
prompt_submit_length = 2048
# OpenAI补全API的参数
# 请在下方填写模型,程序自动选择接口
# 现已支持的模型有:
#
# 'gpt-4'
# 'gpt-4-0314'
# 'gpt-4-32k'
# 'gpt-4-32k-0314'
# 'gpt-3.5-turbo'
# 'gpt-3.5-turbo-0301'
# 'text-davinci-003'
@ -150,10 +183,10 @@ prompt_submit_length = 1024
# 'text-ada-001'
#
# 具体请查看OpenAI的文档: https://beta.openai.com/docs/api-reference/completions/create
# 请将内容修改到config.py中请勿修改config-template.py
completion_api_params = {
"model": "gpt-3.5-turbo",
"temperature": 0.9, # 数值越低得到的回答越理性,取值范围[0, 1]
"max_tokens": 1024, # 每次获取OpenAI接口响应的文字量上限, 不高于4096
"top_p": 1, # 生成的文本的文本与要求的符合度, 取值范围[0, 1]
"frequency_penalty": 0.2,
"presence_penalty": 1.0,
@ -257,11 +290,4 @@ help_message = """此机器人通过调用OpenAI的GPT-3大型语言模型生成
每次会话最后一次交互后{}分钟后会自动结束结束后将开启新会话如需继续前一次会话请发送 !last 重新开启
欢迎到github.com/RockChinQ/QChatGPT 给个star
帮助信息
!help - 显示帮助
!reset - 重置会话
!last - 切换到前一次的对话
!next - 切换到后一次的对话
!prompt - 显示当前对话所有内容
!list - 列出所有历史会话
!usage - 列出各个api-key的使用量""".format(session_expire_time // 60)
指令帮助信息请查看: https://github.com/RockChinQ/QChatGPT/wiki/%E5%8A%9F%E8%83%BD%E4%BD%BF%E7%94%A8#%E6%9C%BA%E5%99%A8%E4%BA%BA%E6%8C%87%E4%BB%A4""".format(session_expire_time // 60)

generate_override_all.py (new file, 23 lines)

@ -0,0 +1,23 @@
# 使用config-template生成override.json的字段全集模板文件override-all.json
# 关于override.json机制请参考https://github.com/RockChinQ/QChatGPT/pull/271
import json
import importlib
template = importlib.import_module("config-template")
output_json = {
"comment": "这是override.json支持的字段全集, 关于override.json机制, 请查看https://github.com/RockChinQ/QChatGPT/pull/271"
}
for k, v in template.__dict__.items():
if k.startswith("__"):
continue
# 如果是module
if type(v) == type(template):
continue
print(k, v, type(v))
output_json[k] = v
with open("override-all.json", "w", encoding="utf-8") as f:
json.dump(output_json, f, indent=4, ensure_ascii=False)
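
为便于理解 override 机制,下面是一个自包含的示意代码:`override.json` 中只保留想覆盖的字段,启动时逐项覆盖同名配置(对应本次提交中 `main.py` 的 load_config 新增逻辑)。其中的具体数值与用 `SimpleNamespace` 代替 `config` 模块仅为演示假设。

```python
# Minimal standalone sketch of the override.json mechanism: each top-level key
# replaces the attribute of the same name on the loaded config. Values are examples.
import json
from types import SimpleNamespace

config = SimpleNamespace(prompt_submit_length=2048, preset_mode="default")  # stand-in for the config module

override_json = json.loads('{"prompt_submit_length": 3072, "preset_mode": "full_scenario"}')

for key, value in override_json.items():
    if hasattr(config, key):
        setattr(config, key, value)          # overwrite the existing setting
    else:
        print("unknown config key in override.json: {}".format(key))

print(config.prompt_submit_length, config.preset_mode)  # 3072 full_scenario
```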

main.py (42 changed lines)

@ -1,4 +1,5 @@
import importlib
import json
import os
import shutil
import threading
@ -12,8 +13,8 @@ try:
except ImportError:
# 尝试安装
import pkg.utils.pkgmgr as pkgmgr
pkgmgr.install_requirements("requirements.txt")
try:
pkgmgr.install_requirements("requirements.txt")
import colorlog
except ImportError:
print("依赖不满足,请查看 https://github.com/RockChinQ/qcg-installer/issues/15")
@ -32,7 +33,7 @@ log_colors_config = {
'INFO': 'white',
'WARNING': 'yellow',
'ERROR': 'red',
'CRITICAL': 'bold_red',
'CRITICAL': 'cyan',
}
@ -114,8 +115,21 @@ def load_config():
setattr(config, key, getattr(config_template, key))
logging.warning("[{}]不存在".format(key))
is_integrity = False
if not is_integrity:
logging.warning("配置文件不完整请依据config-template.py检查config.py")
# 检查override.json覆盖
if os.path.exists("override.json"):
override_json = json.load(open("override.json", "r", encoding="utf-8"))
for key in override_json:
if hasattr(config, key):
setattr(config, key, override_json[key])
logging.info("覆写配置[{}]为[{}]".format(key, override_json[key]))
else:
logging.error("无法覆写配置[{}]为[{}]该配置不存在请检查override.json是否正确".format(key, override_json[key]))
if not is_integrity:
logging.warning("以上配置已被设为默认值将在5秒后继续启动... ")
time.sleep(5)
@ -146,7 +160,6 @@ def start(first_time_init=False):
try:
sh = reset_logging()
pkg.utils.context.context['logger_handler'] = sh
# 检查是否设置了管理员
@ -180,6 +193,7 @@ def start(first_time_init=False):
import pkg.openai.dprompt
pkg.openai.dprompt.read_prompt_from_file()
pkg.openai.dprompt.read_scenario_from_file()
# 主启动流程
database = pkg.database.manager.DatabaseManager()
@ -258,6 +272,13 @@ def start(first_time_init=False):
# run_bot_wrapper
# )
finally:
# 判断若是Windows输出选择模式可能会暂停程序的警告
if os.name == 'nt':
time.sleep(2)
logging.info("您正在使用Windows系统若命令行窗口处于“选择”模式程序可能会被暂停此时请右键点击窗口空白区域使其取消选择模式。")
time.sleep(12)
if first_time_init:
if not known_exception_caught:
logging.info('程序启动完成,如长时间未显示 ”成功登录到账号xxxxx“ ,并且不回复消息,请查看 '
@ -289,13 +310,22 @@ def start(first_time_init=False):
import pkg.utils.updater
try:
if pkg.utils.updater.is_new_version_available():
pkg.utils.context.get_qqbot_manager().notify_admin("新版本可用,请发送 !update 进行自动更新\n更新日志:\n{}".format("\n".join(pkg.utils.updater.get_rls_notes())))
logging.info("新版本可用,请发送 !update 进行自动更新\n更新日志:\n{}".format("\n".join(pkg.utils.updater.get_rls_notes())))
else:
logging.info("当前已是最新版本")
except Exception as e:
logging.warning("检查更新失败:{}".format(e))
try:
import pkg.utils.announcement as announcement
new_announcement = announcement.fetch_new()
if new_announcement != "":
logging.critical("[公告] {}".format(new_announcement))
except Exception as e:
logging.warning("获取公告失败:{}".format(e))
return qqbot
def stop():
import pkg.qqbot.manager
@ -331,6 +361,10 @@ def check_file():
if not os.path.exists("sensitive.json"):
shutil.copy("sensitive-template.json", "sensitive.json")
# 检查是否有scenario/default.json
if not os.path.exists("scenario/default.json"):
shutil.copy("scenario/default-template.json", "scenario/default.json")
# 检查temp目录
if not os.path.exists("temp/"):
os.mkdir("temp/")

override-all.json (new file, 75 lines)

@ -0,0 +1,75 @@
{
"comment": "这是override.json支持的字段全集, 关于override.json机制, 请查看https://github.com/RockChinQ/QChatGPT/pull/271",
"mirai_http_api_config": {
"adapter": "WebSocketAdapter",
"host": "localhost",
"port": 8080,
"verifyKey": "yirimirai",
"qq": 1234567890
},
"openai_config": {
"api_key": {
"default": "openai_api_key"
},
"http_proxy": null
},
"admin_qq": 0,
"default_prompt": {
"default": "如果我之后想获取帮助,请你说“输入!help获取帮助”"
},
"preset_mode": "default",
"response_rules": {
"at": true,
"prefix": [
"/ai",
"!ai",
"ai",
"ai"
],
"regexp": [],
"random_rate": 0.0
},
"ignore_rules": {
"prefix": [
"/"
],
"regexp": []
},
"income_msg_check": false,
"sensitive_word_filter": true,
"baidu_check": false,
"baidu_api_key": "",
"baidu_secret_key": "",
"inappropriate_message_tips": "[百度云]请珍惜机器人,当前返回内容不合规",
"encourage_sponsor_at_start": true,
"prompt_submit_length": 1024,
"completion_api_params": {
"model": "gpt-3.5-turbo",
"temperature": 0.9,
"top_p": 1,
"frequency_penalty": 0.2,
"presence_penalty": 1.0
},
"image_api_params": {
"size": "256x256"
},
"quote_origin": true,
"include_image_description": true,
"process_message_timeout": 30,
"show_prefix": false,
"blob_message_threshold": 256,
"blob_message_strategy": "forward",
"font_path": "",
"retry_times": 3,
"hide_exce_info_to_user": false,
"alter_tip_message": "出错了,请稍后再试",
"pool_num": 10,
"session_expire_time": 1200,
"rate_limitation": 60,
"rate_limit_strategy": "wait",
"rate_limit_drop_tip": "本分钟对话次数超过限速次数,此对话被丢弃",
"upgrade_dependencies": true,
"report_usage": true,
"logging_level": 20,
"help_message": "此机器人通过调用OpenAI的GPT-3大型语言模型生成回复不具有情感。\n你可以用自然语言与其交流回复的消息中[GPT]开头的为模型生成的语言,[bot]开头的为程序提示。\n了解此项目请找QQ 1010553892 联系作者\n请不要用其生成整篇文章或大段代码因为每次只会向模型提交少部分文字生成大部分文字会产生偏题、前后矛盾等问题\n每次会话最后一次交互后20分钟后会自动结束结束后将开启新会话如需继续前一次会话请发送 !last 重新开启\n欢迎到github.com/RockChinQ/QChatGPT 给个star\n\n指令帮助信息请查看: https://github.com/RockChinQ/QChatGPT/wiki/%E5%8A%9F%E8%83%BD%E4%BD%BF%E7%94%A8#%E6%9C%BA%E5%99%A8%E4%BA%BA%E6%8C%87%E4%BB%A4"
}


@ -46,7 +46,7 @@ class DataGatherer:
config = pkg.utils.context.get_config()
if hasattr(config, "report_usage") and not config.report_usage:
return
res = requests.get("http://rockchin.top:18989/usage?service_name=qchatgpt.{}&version={}&count={}".format(subservice_name, self.version_str, count))
res = requests.get("http://reports.rockchin.top:18989/usage?service_name=qchatgpt.{}&version={}&count={}".format(subservice_name, self.version_str, count))
if res.status_code != 200 or res.text != "ok":
logging.warning("report to server failed, status_code: {}, text: {}".format(res.status_code, res.text))
except:


@ -35,6 +35,7 @@ class DatabaseManager:
def __execute__(self, *args, **kwargs) -> Cursor:
# logging.debug('SQL: {}'.format(sql))
logging.debug('SQL: {}'.format(args))
c = self.cursor.execute(*args, **kwargs)
self.conn.commit()
return c
@ -52,10 +53,30 @@ class DatabaseManager:
`create_timestamp` bigint not null,
`last_interact_timestamp` bigint not null,
`status` varchar(255) not null default 'on_going',
`prompt` text not null
`default_prompt` text not null default '',
`prompt` text not null,
`token_counts` text not null default '[]'
)
""")
# 检查sessions表是否存在`default_prompt`字段, 检查是否存在`token_counts`字段
self.__execute__("PRAGMA table_info('sessions')")
columns = self.cursor.fetchall()
has_default_prompt = False
has_token_counts = False
for field in columns:
if field[1] == 'default_prompt':
has_default_prompt = True
if field[1] == 'token_counts':
has_token_counts = True
if has_default_prompt and has_token_counts:
break
if not has_default_prompt:
self.__execute__("alter table `sessions` add column `default_prompt` text not null default ''")
if not has_token_counts:
self.__execute__("alter table `sessions` add column `token_counts` text not null default '[]'")
self.__execute__("""
create table if not exists `account_fee`(
`id` INTEGER PRIMARY KEY AUTOINCREMENT,
@ -75,7 +96,7 @@ class DatabaseManager:
# session持久化
def persistence_session(self, subject_type: str, subject_number: int, create_timestamp: int,
last_interact_timestamp: int, prompt: str):
last_interact_timestamp: int, prompt: str, default_prompt: str = '', token_counts: str = ''):
"""持久化指定session"""
# 检查是否已经有了此name和create_timestamp的session
@ -88,20 +109,20 @@ class DatabaseManager:
if count == 0:
sql = """
insert into `sessions` (`name`, `type`, `number`, `create_timestamp`, `last_interact_timestamp`, `prompt`)
values (?, ?, ?, ?, ?, ?)
insert into `sessions` (`name`, `type`, `number`, `create_timestamp`, `last_interact_timestamp`, `prompt`, `default_prompt`, `token_counts`)
values (?, ?, ?, ?, ?, ?, ?, ?)
"""
self.__execute__(sql,
("{}_{}".format(subject_type, subject_number), subject_type, subject_number, create_timestamp,
last_interact_timestamp, prompt))
last_interact_timestamp, prompt, default_prompt, token_counts))
else:
sql = """
update `sessions` set `last_interact_timestamp` = ?, `prompt` = ?
update `sessions` set `last_interact_timestamp` = ?, `prompt` = ?, `token_counts` = ?
where `type` = ? and `number` = ? and `create_timestamp` = ?
"""
self.__execute__(sql, (last_interact_timestamp, prompt, subject_type,
self.__execute__(sql, (last_interact_timestamp, prompt, token_counts, subject_type,
subject_number, create_timestamp))
# 显式关闭一个session
@ -126,7 +147,7 @@ class DatabaseManager:
# 从数据库中加载所有还没过期的session
config = pkg.utils.context.get_config()
self.__execute__("""
select `name`, `type`, `number`, `create_timestamp`, `last_interact_timestamp`, `prompt`, `status`
select `name`, `type`, `number`, `create_timestamp`, `last_interact_timestamp`, `prompt`, `status`, `default_prompt`, `token_counts`
from `sessions` where `last_interact_timestamp` > {}
""".format(int(time.time()) - config.session_expire_time))
results = self.cursor.fetchall()
@ -139,6 +160,8 @@ class DatabaseManager:
last_interact_timestamp = result[4]
prompt = result[5]
status = result[6]
default_prompt = result[7]
token_counts = result[8]
# 当且仅当最后一个该对象的会话是on_going状态时才会被加载
if status == 'on_going':
@ -147,7 +170,9 @@ class DatabaseManager:
'subject_number': subject_number,
'create_timestamp': create_timestamp,
'last_interact_timestamp': last_interact_timestamp,
'prompt': prompt
'prompt': prompt,
'default_prompt': default_prompt,
'token_counts': token_counts
}
else:
if session_name in sessions:
@ -159,7 +184,7 @@ class DatabaseManager:
def last_session(self, session_name: str, cursor_timestamp: int):
self.__execute__("""
select `name`, `type`, `number`, `create_timestamp`, `last_interact_timestamp`, `prompt`, `status`
select `name`, `type`, `number`, `create_timestamp`, `last_interact_timestamp`, `prompt`, `status`, `default_prompt`, `token_counts`
from `sessions` where `name` = '{}' and `last_interact_timestamp` < {} order by `last_interact_timestamp` desc
limit 1
""".format(session_name, cursor_timestamp))
@ -175,20 +200,24 @@ class DatabaseManager:
last_interact_timestamp = result[4]
prompt = result[5]
status = result[6]
default_prompt = result[7]
token_counts = result[8]
return {
'subject_type': subject_type,
'subject_number': subject_number,
'create_timestamp': create_timestamp,
'last_interact_timestamp': last_interact_timestamp,
'prompt': prompt
'prompt': prompt,
'default_prompt': default_prompt,
'token_counts': token_counts
}
# 获取此session_name后一个session的数据
def next_session(self, session_name: str, cursor_timestamp: int):
self.__execute__("""
select `name`, `type`, `number`, `create_timestamp`, `last_interact_timestamp`, `prompt`, `status`
select `name`, `type`, `number`, `create_timestamp`, `last_interact_timestamp`, `prompt`, `status`, `default_prompt`, `token_counts`
from `sessions` where `name` = '{}' and `last_interact_timestamp` > {} order by `last_interact_timestamp` asc
limit 1
""".format(session_name, cursor_timestamp))
@ -204,19 +233,23 @@ class DatabaseManager:
last_interact_timestamp = result[4]
prompt = result[5]
status = result[6]
default_prompt = result[7]
token_counts = result[8]
return {
'subject_type': subject_type,
'subject_number': subject_number,
'create_timestamp': create_timestamp,
'last_interact_timestamp': last_interact_timestamp,
'prompt': prompt
'prompt': prompt,
'default_prompt': default_prompt,
'token_counts': token_counts
}
# 列出与某个对象的所有对话session
def list_history(self, session_name: str, capacity: int, page: int):
self.__execute__("""
select `name`, `type`, `number`, `create_timestamp`, `last_interact_timestamp`, `prompt`, `status`
select `name`, `type`, `number`, `create_timestamp`, `last_interact_timestamp`, `prompt`, `status`, `default_prompt`, `token_counts`
from `sessions` where `name` = '{}' order by `last_interact_timestamp` desc limit {} offset {}
""".format(session_name, capacity, capacity * page))
results = self.cursor.fetchall()
@ -229,17 +262,42 @@ class DatabaseManager:
last_interact_timestamp = result[4]
prompt = result[5]
status = result[6]
default_prompt = result[7]
token_counts = result[8]
sessions.append({
'subject_type': subject_type,
'subject_number': subject_number,
'create_timestamp': create_timestamp,
'last_interact_timestamp': last_interact_timestamp,
'prompt': prompt
'prompt': prompt,
'default_prompt': default_prompt,
'token_counts': token_counts
})
return sessions
def delete_history(self, session_name: str, index: int) -> bool:
# 删除倒序第index个session
# 查找其id再删除
self.__execute__("""
delete from `sessions` where `id` in (select `id` from `sessions` where `name` = '{}' order by `last_interact_timestamp` desc limit 1 offset {})
""".format(session_name, index))
return self.cursor.rowcount == 1
def delete_all_history(self, session_name: str) -> bool:
self.__execute__("""
delete from `sessions` where `name` = '{}'
""".format(session_name))
return self.cursor.rowcount > 0
def delete_all_session_history(self) -> bool:
self.__execute__("""
delete from `sessions`
""")
return self.cursor.rowcount > 0
# 将apikey的使用量存进数据库
def dump_api_key_usage(self, api_keys: dict, usage: dict):
logging.debug('dumping api key usage...')


@ -1,4 +1,6 @@
# 多情景预设值管理
import json
import logging
__current__ = "default"
"""当前默认使用的情景预设的名称
@ -9,8 +11,10 @@ __current__ = "default"
__prompts_from_files__ = {}
"""从文件中读取的情景预设值"""
__scenario_from_files__ = {}
def read_prompt_from_file() -> str:
def read_prompt_from_file():
"""从文件读取预设值"""
# 读取prompts/目录下的所有文件,以文件名为键,文件内容为值
# 保存在__prompts_from_files__中
@ -23,6 +27,19 @@ def read_prompt_from_file() -> str:
__prompts_from_files__[file] = f.read()
def read_scenario_from_file():
"""从JSON文件读取情景预设"""
global __scenario_from_files__
import os
__scenario_from_files__ = {}
for file in os.listdir("scenario"):
if file == "default-template.json":
continue
with open(os.path.join("scenario", file), encoding="utf-8") as f:
__scenario_from_files__[file] = json.load(f)
def get_prompt_dict() -> dict:
"""获取预设值字典"""
import config
@ -65,15 +82,40 @@ def set_to_default():
__current__ = list(default_dict.keys())[0]
def get_prompt(name: str = None) -> str:
def get_prompt(name: str = None) -> list:
global __scenario_from_files__
import config
preset_mode = config.preset_mode
"""获取预设值"""
if name is None:
name = get_current()
default_dict = get_prompt_dict()
# JSON预设方式
if preset_mode == 'full_scenario':
import os
for key in default_dict:
if key.lower().startswith(name.lower()):
return default_dict[key]
for key in __scenario_from_files__:
if key.lower().startswith(name.lower()):
logging.debug('成功加载情景预设从JSON文件: {}'.format(key))
return __scenario_from_files__[key]['prompt']
# 默认预设方式
elif preset_mode == 'default':
raise KeyError("未找到情景预设: " + name)
default_dict = get_prompt_dict()
for key in default_dict:
if key.lower().startswith(name.lower()):
return [
{
"role": "user",
"content": default_dict[key]
},
{
"role": "assistant",
"content": "好的。"
}
]
raise KeyError("未找到默认情景预设: " + name)


@ -88,4 +88,4 @@ class KeysManager:
for key_name in self.api_key:
if self.api_key[key_name] == api_key:
return key_name
return ""
return ""


@ -34,7 +34,7 @@ class OpenAIInteract:
pkg.utils.context.set_openai_manager(self)
# 请求OpenAI Completion
def request_completion(self, prompts) -> str:
def request_completion(self, prompts) -> tuple[str, int]:
"""请求补全接口回复
Parameters:
@ -60,14 +60,18 @@ class OpenAIInteract:
logging.debug("OpenAI response: %s", response)
# 记录使用量
current_round_token = 0
if 'model' in config.completion_api_params:
self.audit_mgr.report_text_model_usage(config.completion_api_params['model'],
ai.get_total_tokens())
current_round_token = ai.get_total_tokens()
elif 'engine' in config.completion_api_params:
self.audit_mgr.report_text_model_usage(config.completion_api_params['engine'],
response['usage']['total_tokens'])
current_round_token = response['usage']['total_tokens']
return ai.get_message()
return ai.get_message(), current_round_token
def request_image(self, prompt) -> dict:
"""请求图片接口回复


@ -21,6 +21,10 @@ COMPLETION_MODELS = {
CHAT_COMPLETION_MODELS = {
'gpt-3.5-turbo',
'gpt-3.5-turbo-0301',
'gpt-4',
'gpt-4-0314',
'gpt-4-32k',
'gpt-4-32k-0314'
}
EDIT_MODELS = {


@ -40,7 +40,7 @@ def reset_session_prompt(session_name, prompt):
prompt = [
{
'role': 'system',
'content': config.default_prompt['default']
'content': config.default_prompt['default'] if type(config.default_prompt) == dict else config.default_prompt
}
]
# 警告
@ -72,9 +72,12 @@ def load_sessions():
temp_session.last_interact_timestamp = session_data[session_name]['last_interact_timestamp']
try:
temp_session.prompt = json.loads(session_data[session_name]['prompt'])
temp_session.token_counts = json.loads(session_data[session_name]['token_counts'])
except Exception:
temp_session.prompt = reset_session_prompt(session_name, session_data[session_name]['prompt'])
temp_session.persistence()
temp_session.default_prompt = json.loads(session_data[session_name]['default_prompt']) if \
session_data[session_name]['default_prompt'] else []
sessions[session_name] = temp_session
@ -104,6 +107,12 @@ class Session:
prompt = []
"""使用list来保存会话中的回合"""
token_counts = []
"""每个回合的token数量"""
default_prompt = []
"""本session的默认prompt"""
create_timestamp = 0
"""会话创建时间"""
@ -129,33 +138,26 @@ class Session:
# 从配置文件获取会话预设信息
def get_default_prompt(self, use_default: str = None):
config = pkg.utils.context.get_config()
import pkg.openai.dprompt as dprompt
if use_default is None:
current_default_prompt = dprompt.get_prompt(dprompt.get_current())
else:
current_default_prompt = dprompt.get_prompt(use_default)
use_default = dprompt.get_current()
return [
{
'role': 'user',
'content': current_default_prompt
}, {
'role': 'assistant',
'content': 'ok'
}
]
current_default_prompt = dprompt.get_prompt(use_default)
return current_default_prompt
def __init__(self, name: str):
self.name = name
self.create_timestamp = int(time.time())
self.last_interact_timestamp = int(time.time())
self.prompt = []
self.token_counts = []
self.schedule()
self.response_lock = threading.Lock()
self.prompt = self.get_default_prompt()
self.default_prompt = self.get_default_prompt()
logging.debug("prompt is: {}".format(self.default_prompt))
# 设定检查session最后一次对话是否超过过期时间的计时器
def schedule(self):
@ -199,11 +201,11 @@ class Session:
self.last_interact_timestamp = int(time.time())
# 触发插件事件
if self.prompt == self.get_default_prompt():
if not self.prompt:
args = {
'session_name': self.name,
'session': self,
'default_prompt': self.prompt,
'default_prompt': self.default_prompt,
}
event = pkg.plugin.host.emit(plugin_models.SessionFirstMessageReceived, **args)
@ -213,9 +215,16 @@ class Session:
config = pkg.utils.context.get_config()
max_length = config.prompt_submit_length if hasattr(config, "prompt_submit_length") else 1024
prompts, counts = self.cut_out(text, max_length)
# 计算请求前的prompt数量
total_token_before_query = 0
for token_count in counts:
total_token_before_query += token_count
# 向API请求补全
message = pkg.utils.context.get_openai_manager().request_completion(
self.cut_out(text, max_length),
message, total_token = pkg.utils.context.get_openai_manager().request_completion(
prompts,
)
# 成功获取,处理回复
@ -232,6 +241,10 @@ class Session:
self.prompt.append({'role': 'user', 'content': text})
self.prompt.append({'role': 'assistant', 'content': res_ans})
# 向token_counts中添加本回合的token数量
self.token_counts.append(total_token-total_token_before_query)
logging.debug("本回合使用token: {}, session counts: {}".format(total_token-total_token_before_query, self.token_counts))
if self.just_switched_to_exist_session:
self.just_switched_to_exist_session = False
self.set_ongoing()
@ -248,35 +261,61 @@ class Session:
question = self.prompt[-2]['content']
self.prompt = self.prompt[:-2]
self.token_counts = self.token_counts[:-1]
# 返回上一回合的问题
return question
# 构建对话体
def cut_out(self, msg: str, max_tokens: int) -> list:
"""将现有prompt进行切割处理使得新的prompt长度不超过max_tokens"""
# 如果用户消息长度超过max_tokens直接返回
def cut_out(self, msg: str, max_tokens: int) -> tuple[list, list]:
"""将现有prompt进行切割处理使得新的prompt长度不超过max_tokens
temp_prompt = [
:return: (新的prompt, 新的token_counts)
"""
# 最终由三个部分组成
# - default_prompt 情景预设固定值
# - changable_prompts 可变部分, 此会话中的历史对话回合
# - current_question 当前问题
# 包装目前的对话回合内容
changable_prompts = []
changable_counts = []
# 倒着来, 遍历prompt的步长为2, 遍历tokens_counts的步长为1
changable_index = len(self.prompt) - 1
token_count_index = len(self.token_counts) - 1
packed_tokens = 0
while changable_index >= 0 and token_count_index >= 0:
if packed_tokens + self.token_counts[token_count_index] > max_tokens:
break
changable_prompts.insert(0, self.prompt[changable_index])
changable_prompts.insert(0, self.prompt[changable_index - 1])
changable_counts.insert(0, self.token_counts[token_count_index])
packed_tokens += self.token_counts[token_count_index]
changable_index -= 2
token_count_index -= 1
# 将default_prompt和changable_prompts合并
result_prompt = self.default_prompt + changable_prompts
# 添加当前问题
result_prompt.append(
{
'role': 'user',
'content': msg
}
]
)
token_count = len(msg)
# 倒序遍历prompt
for i in range(len(self.prompt) - 1, -1, -1):
if token_count >= max_tokens:
break
logging.debug('cut_out: {}\nchangable section tokens: {}\npacked counts: {}\nsession counts: {}'.format(json.dumps(result_prompt, ensure_ascii=False, indent=4),
packed_tokens,
changable_counts,
self.token_counts))
# 将prompt加到temp_prompt头部
temp_prompt.insert(0, self.prompt[i])
token_count += len(self.prompt[i]['content'])
logging.debug('cut_out: {}'.format(str(temp_prompt)))
return temp_prompt
return result_prompt, changable_counts
# 持久化session
def persistence(self):
@ -291,11 +330,11 @@ class Session:
subject_number = int(name_spt[1])
db_inst.persistence_session(subject_type, subject_number, self.create_timestamp, self.last_interact_timestamp,
json.dumps(self.prompt))
json.dumps(self.prompt), json.dumps(self.default_prompt), json.dumps(self.token_counts))
# 重置session
def reset(self, explicit: bool = False, expired: bool = False, schedule_new: bool = True, use_prompt: str = None):
if self.prompt[-1]['role'] != "system":
if self.prompt:
self.persistence()
if explicit:
# 触发插件事件
@ -311,7 +350,10 @@ class Session:
if expired:
pkg.utils.context.get_database_manager().set_session_expired(self.name, self.create_timestamp)
self.prompt = self.get_default_prompt(use_prompt)
self.default_prompt = self.get_default_prompt(use_prompt)
self.prompt = []
self.token_counts = []
self.create_timestamp = int(time.time())
self.last_interact_timestamp = int(time.time())
self.just_switched_to_exist_session = False
@ -337,9 +379,11 @@ class Session:
self.last_interact_timestamp = last_one['last_interact_timestamp']
try:
self.prompt = json.loads(last_one['prompt'])
self.token_counts = json.loads(last_one['token_counts'])
except json.decoder.JSONDecodeError:
self.prompt = reset_session_prompt(self.name, last_one['prompt'])
self.persistence()
self.default_prompt = json.loads(last_one['default_prompt']) if last_one['default_prompt'] else []
self.just_switched_to_exist_session = True
return self
@ -356,9 +400,11 @@ class Session:
self.last_interact_timestamp = next_one['last_interact_timestamp']
try:
self.prompt = json.loads(next_one['prompt'])
self.token_counts = json.loads(next_one['token_counts'])
except json.decoder.JSONDecodeError:
self.prompt = reset_session_prompt(self.name, next_one['prompt'])
self.persistence()
self.default_prompt = json.loads(next_one['default_prompt']) if next_one['default_prompt'] else []
self.just_switched_to_exist_session = True
return self
@ -366,5 +412,11 @@ class Session:
def list_history(self, capacity: int = 10, page: int = 0):
return pkg.utils.context.get_database_manager().list_history(self.name, capacity, page)
def delete_history(self, index: int) -> bool:
return pkg.utils.context.get_database_manager().delete_history(self.name, index)
def delete_all_history(self) -> bool:
return pkg.utils.context.get_database_manager().delete_all_history(self.name)
def draw_image(self, prompt: str):
return pkg.utils.context.get_openai_manager().request_image(prompt)


@ -5,6 +5,7 @@ import importlib
import os
import pkgutil
import sys
import shutil
import traceback
import pkg.utils.context as context
@ -160,6 +161,22 @@ def install_plugin(repo_url: str):
main.reset_logging()
def uninstall_plugin(plugin_name: str) -> str:
""" 卸载插件 """
if plugin_name not in __plugins__:
raise Exception("插件不存在")
# 获取文件夹路径
plugin_path = __plugins__[plugin_name]['path'].replace("\\", "/")
# 剪切路径为plugins/插件名
plugin_path = plugin_path.split("plugins/")[1].split("/")[0]
# 删除文件夹
shutil.rmtree("plugins/"+plugin_path)
return "plugins/"+plugin_path
class EventContext:
""" 事件上下文 """
eid = 0


@ -1,5 +1,4 @@
# 长消息处理相关
import logging
import os
import time
import base64
@ -67,7 +66,7 @@ def check_text(text: str) -> list:
"""检查文本是否为长消息,并转换成该使用的消息链组件"""
if not hasattr(config, 'blob_message_threshold'):
return [text]
if len(text) > config.blob_message_threshold:
if not hasattr(config, 'blob_message_strategy'):
raise AttributeError('未定义长消息处理策略')
@ -77,8 +76,6 @@ def check_text(text: str) -> list:
# 转换成图片
return [text_to_image(text)]
elif config.blob_message_strategy == 'forward':
# 敏感词屏蔽
text = context.get_qqbot_manager().reply_filter.process(text)
# 包装转发消息
display = ForwardMessageDiaplay(


pkg/qqbot/cmds/func.py (new file, 36 lines)

@ -0,0 +1,36 @@
from pkg.qqbot.cmds.model import command
import logging
from mirai import Image
import config
import pkg.openai.session
@command(
"draw",
"使用DALL·E模型作画",
"!draw <图片提示语>",
[],
False
)
def cmd_draw(cmd: str, params: list, session_name: str,
text_message: str, launcher_type: str, launcher_id: int,
sender_id: int, is_admin: bool) -> list:
"""使用DALL·E模型作画"""
reply = []
if len(params) == 0:
reply = ["[bot]err:请输入图片描述文字"]
else:
session = pkg.openai.session.get_session(session_name)
res = session.draw_image(" ".join(params))
logging.debug("draw_image result:{}".format(res))
reply = [Image(url=res['data'][0]['url'])]
if not (hasattr(config, 'include_image_description')
and not config.include_image_description):
reply.append(" ".join(params))
return reply

pkg/qqbot/cmds/model.py (new file, 45 lines)

@ -0,0 +1,45 @@
# 指令模型
import logging
commands = []
"""已注册的指令类
{
"name": "指令名",
"description": "指令描述",
"usage": "指令用法",
"aliases": ["别名1", "别名2"],
"admin_only": "是否仅管理员可用",
"func": "指令执行函数"
}
"""
def command(name: str, description: str, usage: str, aliases: list = None, admin_only: bool = False):
"""指令装饰器"""
def wrapper(fun):
commands.append({
"name": name,
"description": description,
"usage": usage,
"aliases": aliases,
"admin_only": admin_only,
"func": fun
})
return fun
return wrapper
def search(cmd: str) -> dict:
"""查找指令"""
for command in commands:
if (command["name"] == cmd) or (cmd in command["aliases"]):
return command
return None
import pkg.qqbot.cmds.func
import pkg.qqbot.cmds.system
import pkg.qqbot.cmds.session
import pkg.qqbot.cmds.plugin
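
上面的注册表与 `search()` 暗示了一个大致如下的分发流程;本次 diff 中并未完整展示实际的分发代码(`manager.py` 仅引入了 `cmdmodel`),因此以下调用方式是根据 `@command` 装饰的函数签名推断的示意,而非项目的真实实现。

```python
# Inferred dispatcher sketch (assumption: the project's real dispatch code may differ).
# search() resolves a command name or alias to its registry record, then the stored
# handler is invoked with the same argument list the @command-decorated functions declare.
from pkg.qqbot.cmds.model import search

def dispatch(text_message: str, session_name: str, launcher_type: str,
             launcher_id: int, sender_id: int, is_admin: bool) -> list:
    cmd = text_message[1:].strip().split(' ')[0]
    params = text_message[1:].strip().split(' ')[1:]

    record = search(cmd)
    if record is None:
        return ["[bot]err:未知指令: {}".format(cmd)]
    if record["admin_only"] and not is_admin:
        return ["[bot]err:权限不足"]

    return record["func"](cmd, params, session_name, text_message,
                          launcher_type, launcher_id, sender_id, is_admin)
```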

pkg/qqbot/cmds/plugin.py (new file, 129 lines)

@ -0,0 +1,129 @@
from pkg.qqbot.cmds.model import command
import pkg.utils.context
import pkg.plugin.switch as plugin_switch
import os
import threading
import logging
def plugin_operation(cmd, params, is_admin):
reply = []
import pkg.plugin.host as plugin_host
import pkg.utils.updater as updater
plugin_list = plugin_host.__plugins__
if len(params) == 0:
reply_str = "[bot]所有插件({}):\n".format(len(plugin_host.__plugins__))
idx = 0
for key in plugin_host.iter_plugins_name():
plugin = plugin_list[key]
reply_str += "\n#{} {} {}\n{}\nv{}\n作者: {}\n"\
.format((idx+1), plugin['name'],
"[已禁用]" if not plugin['enabled'] else "",
plugin['description'],
plugin['version'], plugin['author'])
if updater.is_repo("/".join(plugin['path'].split('/')[:-1])):
remote_url = updater.get_remote_url("/".join(plugin['path'].split('/')[:-1]))
if remote_url != "https://github.com/RockChinQ/QChatGPT" and remote_url != "https://gitee.com/RockChin/QChatGPT":
reply_str += "源码: "+remote_url+"\n"
idx += 1
reply = [reply_str]
elif params[0] == 'update':
# 更新所有插件
if is_admin:
def closure():
import pkg.utils.context
updated = []
for key in plugin_list:
plugin = plugin_list[key]
if updater.is_repo("/".join(plugin['path'].split('/')[:-1])):
success = updater.pull_latest("/".join(plugin['path'].split('/')[:-1]))
if success:
updated.append(plugin['name'])
# 检查是否有requirements.txt
pkg.utils.context.get_qqbot_manager().notify_admin("正在安装依赖...")
for key in plugin_list:
plugin = plugin_list[key]
if os.path.exists("/".join(plugin['path'].split('/')[:-1])+"/requirements.txt"):
logging.info("{}检测到requirements.txt安装依赖".format(plugin['name']))
import pkg.utils.pkgmgr
pkg.utils.pkgmgr.install_requirements("/".join(plugin['path'].split('/')[:-1])+"/requirements.txt")
import main
main.reset_logging()
pkg.utils.context.get_qqbot_manager().notify_admin("已更新插件: {}".format(", ".join(updated)))
threading.Thread(target=closure).start()
reply = ["[bot]正在更新所有插件,请勿重复发起..."]
else:
reply = ["[bot]err:权限不足"]
elif params[0] == 'del' or params[0] == 'delete':
if is_admin:
if len(params) < 2:
reply = ["[bot]err:未指定插件名"]
else:
plugin_name = params[1]
if plugin_name in plugin_list:
unin_path = plugin_host.uninstall_plugin(plugin_name)
reply = ["[bot]已删除插件: {} ({}), 请发送 !reload 重载插件".format(plugin_name, unin_path)]
else:
reply = ["[bot]err:未找到插件: {}, 请使用!plugin指令查看插件列表".format(plugin_name)]
else:
reply = ["[bot]err:权限不足,请使用管理员账号私聊发起"]
elif params[0] == 'on' or params[0] == 'off' :
new_status = params[0] == 'on'
if is_admin:
if len(params) < 2:
reply = ["[bot]err:未指定插件名"]
else:
plugin_name = params[1]
if plugin_name in plugin_list:
plugin_list[plugin_name]['enabled'] = new_status
plugin_switch.dump_switch()
reply = ["[bot]已{}插件: {}".format("启用" if new_status else "禁用", plugin_name)]
else:
reply = ["[bot]err:未找到插件: {}, 请使用!plugin指令查看插件列表".format(plugin_name)]
else:
reply = ["[bot]err:权限不足,请使用管理员账号私聊发起"]
elif params[0].startswith("http"):
if is_admin:
def closure():
try:
plugin_host.install_plugin(params[0])
pkg.utils.context.get_qqbot_manager().notify_admin("插件安装成功,请发送 !reload 指令重载插件")
except Exception as e:
logging.error("插件安装失败:{}".format(e))
pkg.utils.context.get_qqbot_manager().notify_admin("插件安装失败:{}".format(e))
threading.Thread(target=closure, args=()).start()
reply = ["[bot]正在安装插件..."]
else:
reply = ["[bot]err:权限不足,请使用管理员账号私聊发起"]
else:
reply = ["[bot]err:未知参数: {}".format(params)]
return reply
@command(
"plugin",
"插件相关操作",
"!plugin\n!plugin <插件仓库地址>\n!plugin update\n!plugin del <插件名>\n!plugin on <插件名>\n!plugin off <插件名>",
[],
False
)
def cmd_plugin(cmd: str, params: list, session_name: str,
text_message: str, launcher_type: str, launcher_id: int,
sender_id: int, is_admin: bool) -> list:
"""插件相关操作"""
reply = plugin_operation(cmd, params, is_admin)
return reply

pkg/qqbot/cmds/session.py (new file, 282 lines)

@ -0,0 +1,282 @@
# 会话管理相关指令
import datetime
import json
from pkg.qqbot.cmds.model import command
import pkg.openai.session
import pkg.utils.context
import config
@command(
"reset",
"重置当前会话",
"!reset\n!reset [使用情景预设名称]",
[],
False
)
def cmd_reset(cmd: str, params: list, session_name: str,
text_message: str, launcher_type: str, launcher_id: int,
sender_id: int, is_admin: bool) -> list:
"""重置会话"""
reply = []
if len(params) == 0:
pkg.openai.session.get_session(session_name).reset(explicit=True)
reply = ["[bot]会话已重置"]
else:
pkg.openai.session.get_session(session_name).reset(explicit=True, use_prompt=params[0])
reply = ["[bot]会话已重置,使用场景预设:{}".format(params[0])]
return reply
@command(
"last",
"切换到前一次会话",
"!last",
[],
False
)
def cmd_last(cmd: str, params: list, session_name: str,
text_message: str, launcher_type: str, launcher_id: int,
sender_id: int, is_admin: bool) -> list:
"""切换到前一次会话"""
reply = []
result = pkg.openai.session.get_session(session_name).last_session()
if result is None:
reply = ["[bot]没有前一次的对话"]
else:
datetime_str = datetime.datetime.fromtimestamp(result.create_timestamp).strftime(
'%Y-%m-%d %H:%M:%S')
reply = ["[bot]已切换到前一次的对话:\n创建时间:{}\n".format(datetime_str)]
return reply
@command(
"next",
"切换到后一次会话",
"!next",
[],
False
)
def cmd_next(cmd: str, params: list, session_name: str,
text_message: str, launcher_type: int, launcher_id: int,
sender_id: int, is_admin: bool) -> list:
"""切换到后一次会话"""
reply = []
result = pkg.openai.session.get_session(session_name).next_session()
if result is None:
reply = ["[bot]没有后一次的对话"]
else:
datetime_str = datetime.datetime.fromtimestamp(result.create_timestamp).strftime(
'%Y-%m-%d %H:%M:%S')
reply = ["[bot]已切换到后一次的对话:\n创建时间:{}\n".format(datetime_str)]
return reply
@command(
"prompt",
"获取当前会话的前文",
"!prompt",
[],
False
)
def cmd_prompt(cmd: str, params: list, session_name: str,
text_message: str, launcher_type: str, launcher_id: int,
sender_id: int, is_admin: bool) -> list:
"""获取当前会话的前文"""
reply = []
msgs = ""
session:list = pkg.openai.session.get_session(session_name).prompt
for msg in session:
if len(params) != 0 and params[0] in ['-all', '-a']:
msgs = msgs + "{}: {}\n\n".format(msg['role'], msg['content'])
elif len(msg['content']) > 30:
msgs = msgs + "[{}]: {}...\n\n".format(msg['role'], msg['content'][:30])
else:
msgs = msgs + "[{}]: {}\n\n".format(msg['role'], msg['content'])
reply = ["[bot]当前对话所有内容:\n{}".format(msgs)]
return reply
@command(
"list",
"列出当前会话的所有历史记录",
"!list\n!list [页数]",
[],
False
)
def cmd_list(cmd: str, params: list, session_name: str,
text_message: str, launcher_type: str, launcher_id: int,
sender_id: int, is_admin: bool) -> list:
"""列出当前会话的所有历史记录"""
reply = []
pkg.openai.session.get_session(session_name).persistence()
page = 0
if len(params) > 0:
try:
page = int(params[0])
except ValueError:
pass
results = pkg.openai.session.get_session(session_name).list_history(page=page)
if len(results) == 0:
reply = ["[bot]第{}页没有历史会话".format(page)]
else:
reply_str = "[bot]历史会话 第{}页:\n".format(page)
current = -1
for i in range(len(results)):
# 时间(使用create_timestamp转换) 序号 部分内容
datetime_obj = datetime.datetime.fromtimestamp(results[i]['create_timestamp'])
msg = ""
try:
msg = json.loads(results[i]['prompt'])
except json.decoder.JSONDecodeError:
msg = pkg.openai.session.reset_session_prompt(session_name, results[i]['prompt'])
# 持久化
pkg.openai.session.get_session(session_name).persistence()
if len(msg) >= 2:
reply_str += "#{} 创建:{} {}\n".format(i + page * 10,
datetime_obj.strftime("%Y-%m-%d %H:%M:%S"),
msg[0]['content'])
else:
reply_str += "#{} 创建:{} {}\n".format(i + page * 10,
datetime_obj.strftime("%Y-%m-%d %H:%M:%S"),
"无内容")
if results[i]['create_timestamp'] == pkg.openai.session.get_session(
session_name).create_timestamp:
current = i + page * 10
reply_str += "\n以上信息倒序排列"
if current != -1:
reply_str += ",当前会话是 #{}\n".format(current)
else:
reply_str += ",当前处于全新会话或不在此页"
reply = [reply_str]
return reply
@command(
"resend",
"重新获取上一次问题的回复",
"!resend",
[],
False
)
def cmd_resend(cmd: str, params: list, session_name: str,
text_message: str, launcher_type: str, launcher_id: int,
sender_id: int, is_admin: bool) -> list:
"""重新获取上一次问题的回复"""
reply = []
session = pkg.openai.session.get_session(session_name)
to_send = session.undo()
mgr = pkg.utils.context.get_qqbot_manager()
reply = pkg.qqbot.message.process_normal_message(to_send, mgr, config,
launcher_type, launcher_id, sender_id)
return reply
@command(
"del",
"删除当前会话的历史记录",
"!del <序号>\n!del all",
[],
False
)
def cmd_del(cmd: str, params: list, session_name: str,
text_message: str, launcher_type: str, launcher_id: int,
sender_id: int, is_admin: bool) -> list:
"""删除当前会话的历史记录"""
reply = []
if len(params) == 0:
reply = ["[bot]参数不足, 格式: !del <序号>\n可以通过!list查看序号"]
else:
if params[0] == 'all':
pkg.openai.session.get_session(session_name).delete_all_history()
reply = ["[bot]已删除所有历史会话"]
elif params[0].isdigit():
if pkg.openai.session.get_session(session_name).delete_history(int(params[0])):
reply = ["[bot]已删除历史会话 #{}".format(params[0])]
else:
reply = ["[bot]没有历史会话 #{}".format(params[0])]
else:
reply = ["[bot]参数错误, 格式: !del <序号>\n可以通过!list查看序号"]
return reply
@command(
"default",
"操作情景预设",
"!default\n!default [指定情景预设为默认]",
[],
False
)
def cmd_default(cmd: str, params: list, session_name: str,
text_message: str, launcher_type: str, launcher_id: int,
sender_id: int, is_admin: bool) -> list:
"""操作情景预设"""
reply = []
if len(params) == 0:
# 输出目前所有情景预设
import pkg.openai.dprompt as dprompt
reply_str = "[bot]当前所有情景预设:\n\n"
for key,value in dprompt.get_prompt_dict().items():
reply_str += " - {}: {}\n".format(key,value)
reply_str += "\n当前默认情景预设:{}\n".format(dprompt.get_current())
reply_str += "请使用!default <情景预设>来设置默认情景预设"
reply = [reply_str]
elif len(params) >0 and is_admin:
# 设置默认情景
import pkg.openai.dprompt as dprompt
try:
dprompt.set_current(params[0])
reply = ["[bot]已设置默认情景预设为:{}".format(dprompt.get_current())]
except KeyError:
reply = ["[bot]err: 未找到情景预设:{}".format(params[0])]
else:
reply = ["[bot]err: 仅管理员可设置默认情景预设"]
return reply
@command(
"delhst",
"删除指定会话的所有历史记录",
"!delhst <会话名称>\n!delhst all",
[],
True
)
def cmd_delhst(cmd: str, params: list, session_name: str,
text_message: str, launcher_type: str, launcher_id: int,
sender_id: int, is_admin: bool) -> list:
"""删除指定会话的所有历史记录"""
reply = []
if len(params) == 0:
reply = ["[bot]err:请输入要删除的会话名: group_<群号> 或者 person_<QQ号>, 或使用 !delhst all 删除所有会话的历史记录"]
else:
if params[0] == "all":
pkg.utils.context.get_database_manager().delete_all_session_history()
reply = ["[bot]已删除所有会话的历史记录"]
else:
if pkg.utils.context.get_database_manager().delete_all_history(params[0]):
reply = ["[bot]已删除会话 {} 的所有历史记录".format(params[0])]
else:
reply = ["[bot]未找到会话 {} 的历史记录".format(params[0])]
return reply

pkg/qqbot/cmds/system.py (new file, 216 lines)

@ -0,0 +1,216 @@
from pkg.qqbot.cmds.model import command
import pkg.utils.context
import pkg.utils.updater
import pkg.utils.credit as credit
import config
import logging
import os
import threading
import traceback
import json
@command(
"help",
"获取帮助信息",
"!help",
[],
False
)
def cmd_help(cmd: str, params: list, session_name: str,
text_message: str, launcher_type: str, launcher_id: int,
sender_id: int, is_admin: bool) -> list:
"""获取帮助信息"""
return ["[bot]" + config.help_message]
@command(
"usage",
"获取使用情况",
"!usage",
[],
False
)
def cmd_usage(cmd: str, params: list, session_name: str,
text_message: str, launcher_type: str, launcher_id: int,
sender_id: int, is_admin: bool) -> list:
"""获取使用情况"""
reply = []
reply_str = "[bot]各api-key使用情况:\n\n"
api_keys = pkg.utils.context.get_openai_manager().key_mgr.api_key
for key_name in api_keys:
text_length = pkg.utils.context.get_openai_manager().audit_mgr \
.get_text_length_of_key(api_keys[key_name])
image_count = pkg.utils.context.get_openai_manager().audit_mgr \
.get_image_count_of_key(api_keys[key_name])
reply_str += "{}:\n - 文本长度:{}\n - 图片数量:{}\n".format(key_name, int(text_length),
int(image_count))
# 获取此key的额度
try:
http_proxy = config.openai_config["http_proxy"] if "http_proxy" in config.openai_config else None
credit_data = credit.fetch_credit_data(api_keys[key_name], http_proxy)
reply_str += " - 使用额度:{:.2f}/{:.2f}\n".format(credit_data['total_used'],credit_data['total_granted'])
except Exception as e:
logging.warning("获取额度失败:{}".format(e))
reply = [reply_str]
return reply
@command(
"version",
"查看版本信息",
"!version",
[],
False
)
def cmd_version(cmd: str, params: list, session_name: str,
text_message: str, launcher_type: str, launcher_id: int,
sender_id: int, is_admin: bool) -> list:
"""查看版本信息"""
reply = []
reply_str = "[bot]当前版本:\n{}\n".format(pkg.utils.updater.get_current_version_info())
try:
if pkg.utils.updater.is_new_version_available():
reply_str += "\n有新版本可用,请使用命令 !update 进行更新"
except:
pass
reply = [reply_str]
return reply
@command(
"reload",
"执行热重载",
"!reload",
[],
True
)
def cmd_reload(cmd: str, params: list, session_name: str,
text_message: str, launcher_type: str, launcher_id: int,
sender_id: int, is_admin: bool) -> list:
"""执行热重载"""
import pkg.utils.reloader
def reload_task():
pkg.utils.reloader.reload_all()
threading.Thread(target=reload_task, daemon=True).start()
@command(
"update",
"更新程序",
"!update",
[],
True
)
def cmd_update(cmd: str, params: list, session_name: str,
text_message: str, launcher_type: str, launcher_id: int,
sender_id: int, is_admin: bool) -> list:
"""更新程序"""
reply = []
import pkg.utils.updater
import pkg.utils.reloader
import pkg.utils.context
def update_task():
try:
if pkg.utils.updater.update_all():
pkg.utils.reloader.reload_all(notify=False)
pkg.utils.context.get_qqbot_manager().notify_admin("更新完成")
else:
pkg.utils.context.get_qqbot_manager().notify_admin("无新版本")
except Exception as e0:
traceback.print_exc()
pkg.utils.context.get_qqbot_manager().notify_admin("更新失败:{}".format(e0))
return
threading.Thread(target=update_task, daemon=True).start()
reply = ["[bot]正在更新,请耐心等待,请勿重复发起更新..."]
def config_operation(cmd, params):
reply = []
config = pkg.utils.context.get_config()
reply_str = ""
if len(params) == 0:
reply = ["[bot]err:请输入配置项"]
else:
cfg_name = params[0]
if cfg_name == 'all':
reply_str = "[bot]所有配置项:\n\n"
for cfg in dir(config):
if not cfg.startswith('__') and not cfg == 'logging':
# 根据配置项类型进行格式化如果是字典则转换为json并格式化
if isinstance(getattr(config, cfg), str):
reply_str += "{}: \"{}\"\n".format(cfg, getattr(config, cfg))
elif isinstance(getattr(config, cfg), dict):
# 不进行unicode转义并格式化
reply_str += "{}: {}\n".format(cfg,
json.dumps(getattr(config, cfg),
ensure_ascii=False, indent=4))
else:
reply_str += "{}: {}\n".format(cfg, getattr(config, cfg))
reply = [reply_str]
elif cfg_name in dir(config):
if len(params) == 1:
# 按照配置项类型进行格式化
if isinstance(getattr(config, cfg_name), str):
reply_str = "[bot]配置项{}: \"{}\"\n".format(cfg_name, getattr(config, cfg_name))
elif isinstance(getattr(config, cfg_name), dict):
reply_str = "[bot]配置项{}: {}\n".format(cfg_name,
json.dumps(getattr(config, cfg_name),
ensure_ascii=False, indent=4))
else:
reply_str = "[bot]配置项{}: {}\n".format(cfg_name, getattr(config, cfg_name))
reply = [reply_str]
else:
cfg_value = " ".join(params[1:])
# 类型转换如果是json则转换为字典
if cfg_value == 'true':
cfg_value = True
elif cfg_value == 'false':
cfg_value = False
elif cfg_value.isdigit():
cfg_value = int(cfg_value)
elif cfg_value.startswith('{') and cfg_value.endswith('}'):
cfg_value = json.loads(cfg_value)
else:
try:
cfg_value = float(cfg_value)
except ValueError:
pass
# 检查类型是否匹配
if isinstance(getattr(config, cfg_name), type(cfg_value)):
setattr(config, cfg_name, cfg_value)
pkg.utils.context.set_config(config)
reply = ["[bot]配置项{}修改成功".format(cfg_name)]
else:
reply = ["[bot]err:配置项{}类型不匹配".format(cfg_name)]
else:
reply = ["[bot]err:未找到配置项 {}".format(cfg_name)]
return reply
@command(
"cfg",
"配置文件相关操作",
"!cfg all\n!cfg <配置项名称>\n!cfg <配置项名称> <配置项新值>",
[],
True
)
def cmd_cfg(cmd: str, params: list, session_name: str,
text_message: str, launcher_type: str, launcher_id: int,
sender_id: int, is_admin: bool) -> list:
"""配置文件相关操作"""
reply = config_operation(cmd, params)
return reply


@ -4,6 +4,7 @@ import json
import datetime
import os
import threading
import traceback
import pkg.openai.session
import pkg.openai.manager
@ -12,151 +13,11 @@ import pkg.utils.updater
import pkg.utils.context
import pkg.qqbot.message
import pkg.utils.credit as credit
import pkg.qqbot.cmds.model as cmdmodel
from mirai import Image
def config_operation(cmd, params):
reply = []
config = pkg.utils.context.get_config()
reply_str = ""
if len(params) == 0:
reply = ["[bot]err:请输入配置项"]
else:
cfg_name = params[0]
if cfg_name == 'all':
reply_str = "[bot]所有配置项:\n\n"
for cfg in dir(config):
if not cfg.startswith('__') and not cfg == 'logging':
# 根据配置项类型进行格式化如果是字典则转换为json并格式化
if isinstance(getattr(config, cfg), str):
reply_str += "{}: \"{}\"\n".format(cfg, getattr(config, cfg))
elif isinstance(getattr(config, cfg), dict):
# 不进行unicode转义并格式化
reply_str += "{}: {}\n".format(cfg,
json.dumps(getattr(config, cfg),
ensure_ascii=False, indent=4))
else:
reply_str += "{}: {}\n".format(cfg, getattr(config, cfg))
reply = [reply_str]
elif cfg_name in dir(config):
if len(params) == 1:
# 按照配置项类型进行格式化
if isinstance(getattr(config, cfg_name), str):
reply_str = "[bot]配置项{}: \"{}\"\n".format(cfg_name, getattr(config, cfg_name))
elif isinstance(getattr(config, cfg_name), dict):
reply_str = "[bot]配置项{}: {}\n".format(cfg_name,
json.dumps(getattr(config, cfg_name),
ensure_ascii=False, indent=4))
else:
reply_str = "[bot]配置项{}: {}\n".format(cfg_name, getattr(config, cfg_name))
reply = [reply_str]
else:
cfg_value = " ".join(params[1:])
# 类型转换如果是json则转换为字典
if cfg_value == 'true':
cfg_value = True
elif cfg_value == 'false':
cfg_value = False
elif cfg_value.isdigit():
cfg_value = int(cfg_value)
elif cfg_value.startswith('{') and cfg_value.endswith('}'):
cfg_value = json.loads(cfg_value)
else:
try:
cfg_value = float(cfg_value)
except ValueError:
pass
# 检查类型是否匹配
if isinstance(getattr(config, cfg_name), type(cfg_value)):
setattr(config, cfg_name, cfg_value)
pkg.utils.context.set_config(config)
reply = ["[bot]配置项{}修改成功".format(cfg_name)]
else:
reply = ["[bot]err:配置项{}类型不匹配".format(cfg_name)]
else:
reply = ["[bot]err:未找到配置项 {}".format(cfg_name)]
return reply
def plugin_operation(cmd, params, is_admin):
reply = []
import pkg.plugin.host as plugin_host
import pkg.utils.updater as updater
plugin_list = plugin_host.__plugins__
if len(params) == 0:
reply_str = "[bot]所有插件({}):\n".format(len(plugin_host.__plugins__))
idx = 0
for key in plugin_host.iter_plugins_name():
plugin = plugin_list[key]
reply_str += "\n#{} {} {}\n{}\nv{}\n作者: {}\n"\
.format((idx+1), plugin['name'],
"[已禁用]" if not plugin['enabled'] else "",
plugin['description'],
plugin['version'], plugin['author'])
if updater.is_repo("/".join(plugin['path'].split('/')[:-1])):
remote_url = updater.get_remote_url("/".join(plugin['path'].split('/')[:-1]))
if remote_url != "https://github.com/RockChinQ/QChatGPT" and remote_url != "https://gitee.com/RockChin/QChatGPT":
reply_str += "源码: "+remote_url+"\n"
idx += 1
reply = [reply_str]
elif params[0] == 'update':
# 更新所有插件
if is_admin:
def closure():
import pkg.utils.context
updated = []
for key in plugin_list:
plugin = plugin_list[key]
if updater.is_repo("/".join(plugin['path'].split('/')[:-1])):
success = updater.pull_latest("/".join(plugin['path'].split('/')[:-1]))
if success:
updated.append(plugin['name'])
# 检查是否有requirements.txt
pkg.utils.context.get_qqbot_manager().notify_admin("正在安装依赖...")
for key in plugin_list:
plugin = plugin_list[key]
if os.path.exists("/".join(plugin['path'].split('/')[:-1])+"/requirements.txt"):
logging.info("{}检测到requirements.txt安装依赖".format(plugin['name']))
import pkg.utils.pkgmgr
pkg.utils.pkgmgr.install_requirements("/".join(plugin['path'].split('/')[:-1])+"/requirements.txt")
import main
main.reset_logging()
pkg.utils.context.get_qqbot_manager().notify_admin("已更新插件: {}".format(", ".join(updated)))
threading.Thread(target=closure).start()
reply = ["[bot]正在更新所有插件,请勿重复发起..."]
else:
reply = ["[bot]err:权限不足"]
elif params[0].startswith("http"):
if is_admin:
def closure():
try:
plugin_host.install_plugin(params[0])
pkg.utils.context.get_qqbot_manager().notify_admin("插件安装成功,请发送 !reload 指令重载插件")
except Exception as e:
logging.error("插件安装失败:{}".format(e))
pkg.utils.context.get_qqbot_manager().notify_admin("插件安装失败:{}".format(e))
threading.Thread(target=closure, args=()).start()
reply = ["[bot]正在安装插件..."]
else:
reply = ["[bot]err:权限不足,请使用管理员账号私聊发起"]
return reply
def process_command(session_name: str, text_message: str, mgr, config,
launcher_type: str, launcher_id: int, sender_id: int, is_admin: bool) -> list:
@ -169,188 +30,30 @@ def process_command(session_name: str, text_message: str, mgr, config,
cmd = text_message[1:].strip().split(' ')[0]
params = text_message[1:].strip().split(' ')[1:]
if cmd == 'help':
reply = ["[bot]" + config.help_message]
elif cmd == 'reset':
if len(params) == 0:
pkg.openai.session.get_session(session_name).reset(explicit=True)
reply = ["[bot]会话已重置"]
else:
pkg.openai.session.get_session(session_name).reset(explicit=True, use_prompt=params[0])
reply = ["[bot]会话已重置,使用场景预设:{}".format(params[0])]
elif cmd == 'last':
result = pkg.openai.session.get_session(session_name).last_session()
if result is None:
reply = ["[bot]没有前一次的对话"]
else:
datetime_str = datetime.datetime.fromtimestamp(result.create_timestamp).strftime(
'%Y-%m-%d %H:%M:%S')
reply = ["[bot]已切换到前一次的对话:\n创建时间:{}\n".format(datetime_str)]
elif cmd == 'next':
result = pkg.openai.session.get_session(session_name).next_session()
if result is None:
reply = ["[bot]没有后一次的对话"]
else:
datetime_str = datetime.datetime.fromtimestamp(result.create_timestamp).strftime(
'%Y-%m-%d %H:%M:%S')
reply = ["[bot]已切换到后一次的对话:\n创建时间:{}\n".format(datetime_str)]
elif cmd == 'prompt':
msgs = ""
session:list = pkg.openai.session.get_session(session_name).prompt
for msg in session:
if len(params) != 0 and params[0] in ['-all', '-a']:
msgs = msgs + "{}: {}\n\n".format(msg['role'], msg['content'])
elif len(msg['content']) > 30:
msgs = msgs + "[{}]: {}...\n\n".format(msg['role'], msg['content'][:30])
else:
msgs = msgs + "[{}]: {}\n\n".format(msg['role'], msg['content'])
reply = ["[bot]当前对话所有内容:\n{}".format(msgs)]
elif cmd == 'list':
pkg.openai.session.get_session(session_name).persistence()
page = 0
if len(params) > 0:
try:
page = int(params[0])
except ValueError:
pass
# 把!~开头的转换成!cfg
if cmd.startswith('~'):
params = [cmd[1:]] + params
cmd = 'cfg'
results = pkg.openai.session.get_session(session_name).list_history(page=page)
if len(results) == 0:
reply = ["[bot]第{}页没有历史会话".format(page)]
else:
reply_str = "[bot]历史会话 第{}页:\n".format(page)
current = -1
for i in range(len(results)):
# 时间(使用create_timestamp转换) 序号 部分内容
datetime_obj = datetime.datetime.fromtimestamp(results[i]['create_timestamp'])
msg = ""
try:
msg = json.loads(results[i]['prompt'])
except json.decoder.JSONDecodeError:
msg = pkg.openai.session.reset_session_prompt(session_name, results[i]['prompt'])
# 持久化
pkg.openai.session.get_session(session_name).persistence()
if len(msg) >= 2:
reply_str += "#{} 创建:{} {}\n".format(i + page * 10,
datetime_obj.strftime("%Y-%m-%d %H:%M:%S"),
msg[1]['content'])
else:
reply_str += "#{} 创建:{} {}\n".format(i + page * 10,
datetime_obj.strftime("%Y-%m-%d %H:%M:%S"),
"无内容")
if results[i]['create_timestamp'] == pkg.openai.session.get_session(
session_name).create_timestamp:
current = i + page * 10
reply_str += "\n以上信息倒序排列"
if current != -1:
reply_str += ",当前会话是 #{}\n".format(current)
else:
reply_str += ",当前处于全新会话或不在此页"
reply = [reply_str]
elif cmd == 'resend':
session = pkg.openai.session.get_session(session_name)
to_send = session.undo()
reply = pkg.qqbot.message.process_normal_message(to_send, mgr, config,
launcher_type, launcher_id, sender_id)
elif cmd == 'usage':
reply_str = "[bot]各api-key使用情况:\n\n"
api_keys = pkg.utils.context.get_openai_manager().key_mgr.api_key
for key_name in api_keys:
text_length = pkg.utils.context.get_openai_manager().audit_mgr \
.get_text_length_of_key(api_keys[key_name])
image_count = pkg.utils.context.get_openai_manager().audit_mgr \
.get_image_count_of_key(api_keys[key_name])
reply_str += "{}:\n - 文本长度:{}\n - 图片数量:{}\n".format(key_name, int(text_length),
int(image_count))
# 获取此key的额度
try:
credit_data = credit.fetch_credit_data(api_keys[key_name])
reply_str += " - 使用额度:{:.2f}/{:.2f}\n".format(credit_data['total_used'],credit_data['total_granted'])
except Exception as e:
logging.warning("获取额度失败:{}".format(e))
reply = [reply_str]
elif cmd == 'draw':
if len(params) == 0:
reply = ["[bot]err:请输入图片描述文字"]
else:
session = pkg.openai.session.get_session(session_name)
res = session.draw_image(" ".join(params))
logging.debug("draw_image result:{}".format(res))
reply = [Image(url=res['data'][0]['url'])]
if not (hasattr(config, 'include_image_description')
and not config.include_image_description):
reply.append(" ".join(params))
elif cmd == 'version':
reply_str = "[bot]当前版本:\n{}\n".format(pkg.utils.updater.get_current_version_info())
try:
if pkg.utils.updater.is_new_version_available():
reply_str += "\n有新版本可用,请使用命令 !update 进行更新"
except:
pass
reply = [reply_str]
elif cmd == 'plugin':
reply = plugin_operation(cmd, params, is_admin)
elif cmd == 'default':
if len(params) == 0:
# 输出目前所有情景预设
import pkg.openai.dprompt as dprompt
reply_str = "[bot]当前所有情景预设:\n\n"
for key,value in dprompt.get_prompt_dict().items():
reply_str += " - {}: {}\n".format(key,value)
reply_str += "\n当前默认情景预设:{}\n".format(dprompt.get_current())
reply_str += "请使用!default <情景预设>来设置默认情景预设"
reply = [reply_str]
elif len(params) >0 and is_admin:
# 设置默认情景
import pkg.openai.dprompt as dprompt
try:
dprompt.set_current(params[0])
reply = ["[bot]已设置默认情景预设为:{}".format(dprompt.get_current())]
except KeyError:
reply = ["[bot]err: 未找到情景预设:{}".format(params[0])]
else:
reply = ["[bot]err: 仅管理员可设置默认情景预设"]
elif cmd == 'reload' and is_admin:
def reload_task():
pkg.utils.reloader.reload_all()
threading.Thread(target=reload_task, daemon=True).start()
elif cmd == 'update' and is_admin:
def update_task():
try:
if pkg.utils.updater.update_all():
pkg.utils.reloader.reload_all(notify=False)
pkg.utils.context.get_qqbot_manager().notify_admin("更新完成")
else:
pkg.utils.context.get_qqbot_manager().notify_admin("无新版本")
except Exception as e0:
pkg.utils.context.get_qqbot_manager().notify_admin("更新失败:{}".format(e0))
return
threading.Thread(target=update_task, daemon=True).start()
reply = ["[bot]正在更新,请耐心等待,请勿重复发起更新..."]
elif cmd == 'cfg' and is_admin:
reply = config_operation(cmd, params)
# 选择指令处理函数
cmd_obj = cmdmodel.search(cmd)
if cmd_obj is not None and (cmd_obj['admin_only'] is False or is_admin):
cmd_func = cmd_obj['func']
reply = cmd_func(
cmd=cmd,
params=params,
session_name=session_name,
text_message=text_message,
launcher_type=launcher_type,
launcher_id=launcher_id,
sender_id=sender_id,
is_admin=is_admin,
)
else:
if cmd.startswith("~") and is_admin:
config_item = cmd[1:]
params = [config_item] + params
reply = config_operation("cfg", params)
else:
reply = ["[bot]err:未知的指令或权限不足: " + cmd]
reply = ["[bot]err:未知的指令或权限不足: " + cmd]
return reply
except Exception as e:
mgr.notify_admin("{}指令执行失败:{}".format(session_name, e))
logging.exception(e)
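The dispatcher above replaces the long if/elif chain with a lookup through pkg.qqbot.cmds.model (imported as cmdmodel), of which only search() and the 'admin_only'/'func' fields are visible in this diff. The snippet below is a minimal sketch of how such a registry could be built, assuming decorator-based registration; everything except search(), 'admin_only' and 'func' is an assumption, not the project's actual API.

# Sketch of a command registry; field names other than 'admin_only' and 'func'
# (the ones the dispatcher above uses) are assumptions.
__commands__ = {}

def register(name: str, admin_only: bool = False):
    """Decorator recording a handler under the given command name."""
    def wrapper(func):
        __commands__[name] = {'name': name, 'admin_only': admin_only, 'func': func}
        return func
    return wrapper

def search(cmd: str):
    """Return the registered command dict, or None for unknown commands."""
    return __commands__.get(cmd)

# Example handler, accepting exactly the keyword arguments the dispatcher passes:
@register('help')
def cmd_help(cmd, params, session_name, text_message,
             launcher_type, launcher_id, sender_id, is_admin, **kwargs):
    import config  # project-level config module, assumed importable here
    return ["[bot]" + config.help_message]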

View File

@ -1,6 +1,5 @@
# 普通消息处理模块
import logging
import time
import openai
import pkg.utils.context
import pkg.openai.session
@ -64,7 +63,7 @@ def process_normal_message(text_message: str, mgr, config, launcher_type: str,
reply = event.get_return_value("reply")
if not event.is_prevented_default():
reply = blob.check_text(prefix + text)
reply = [prefix + text]
except openai.error.APIConnectionError as e:
err_msg = str(e)
if err_msg.__contains__('Error communicating with OpenAI'):
@ -117,8 +116,7 @@ def process_normal_message(text_message: str, mgr, config, launcher_type: str,
reply = handle_exception("{}会话调用API失败:{}".format(session_name, e),
"[bot]err:RateLimitError,请重试或联系作者,或等待修复")
except openai.error.InvalidRequestError as e:
reply = handle_exception("{}API调用参数错误:{}\n\n这可能是由于config.py中的prompt_submit_length参数或"
"completion_api_params中的max_tokens参数数值过大导致的请尝试将其降低".format(
reply = handle_exception("{}API调用参数错误:{}\n".format(
session_name, e), "[bot]err:API调用参数错误请联系管理员或等待修复")
except openai.error.ServiceUnavailableError as e:
reply = handle_exception("{}API调用服务不可用:{}".format(session_name, e), "[bot]err:API调用服务不可用请重试或联系管理员或等待修复")

View File

@ -26,6 +26,7 @@ import pkg.plugin.host as plugin_host
import pkg.plugin.models as plugin_models
import pkg.qqbot.ignore as ignore
import pkg.qqbot.banlist as banlist
import pkg.qqbot.blob as blob
processing = []
@ -157,6 +158,7 @@ def process_message(launcher_type: str, launcher_id: int, text_message: str, mes
reply[0][:min(100, len(reply[0]))] + (
"..." if len(reply[0]) > 100 else "")))
reply = [mgr.reply_filter.process(reply[0])]
reply = blob.check_text(reply[0])
else:
logging.info("回复[{}]消息".format(session_name))

47
pkg/utils/announcement.py Normal file
View File

@ -0,0 +1,47 @@
import base64
import os
import requests
import pkg.utils.network as network
def read_latest() -> str:
resp = requests.get(
url="https://api.github.com/repos/RockChinQ/QChatGPT/contents/res/announcement",
proxies=network.wrapper_proxies()
)
obj_json = resp.json()
b64_content = obj_json["content"]
# 解码
content = base64.b64decode(b64_content).decode("utf-8")
return content
def read_saved() -> str:
# 已保存的在res/announcement_saved
# 检查是否存在
if not os.path.exists("res/announcement_saved"):
with open("res/announcement_saved", "w") as f:
f.write("")
with open("res/announcement_saved", "r") as f:
content = f.read()
return content
def write_saved(content: str):
# 已保存的在res/announcement_saved
with open("res/announcement_saved", "w") as f:
f.write(content)
def fetch_new() -> str:
latest = read_latest()
saved = read_saved()
if latest.replace(saved, "").strip() == "":
return ""
else:
write_saved(latest)
return latest.replace(saved, "").strip()
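fetch_new() compares the announcement fetched from the GitHub contents API with the copy cached in res/announcement_saved and returns only the part not seen before. A usage sketch follows; only fetch_new() comes from the module above, the surrounding startup hook and notify callback are assumptions.

# Sketch: check for a new announcement at startup and forward it to the admin.
import logging
import pkg.utils.announcement as announcement

def announce_on_startup(notify_admin):
    try:
        new_text = announcement.fetch_new()
        if new_text:
            notify_admin("[announcement]\n" + new_text)
    except Exception:
        # a failed GitHub request should not block startup
        logging.exception("failed to fetch announcement")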

File diff suppressed because one or more lines are too long

View File

@ -1,13 +1,19 @@
# OpenAI账号免费额度剩余查询
import requests
def fetch_credit_data(api_key: str) -> dict:
def fetch_credit_data(api_key: str, http_proxy: str) -> dict:
"""OpenAI账号免费额度剩余查询"""
proxies = {
"http":http_proxy,
"https":http_proxy
} if http_proxy is not None else None
resp = requests.get(
url="https://api.openai.com/dashboard/billing/credit_grants",
headers={
"Authorization": "Bearer {}".format(api_key),
}
},
proxies=proxies
)
return resp.json()
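fetch_credit_data now takes the HTTP proxy explicitly, so callers such as the !usage handler shown earlier have to pass it in. Below is a sketch of an adapted call site, assuming the proxy lives in config.openai_config['proxy'] as pkg/utils/network.py further down suggests.

# Sketch of a caller adapted to the new signature; strings mirror the !usage handler.
import logging
import config
import pkg.utils.credit as credit

def format_key_credit(api_key: str) -> str:
    http_proxy = config.openai_config.get('proxy')  # None disables the proxy
    try:
        data = credit.fetch_credit_data(api_key, http_proxy)
        return " - 使用额度:{:.2f}/{:.2f}\n".format(data['total_used'], data['total_granted'])
    except Exception as e:
        logging.warning("获取额度失败:{}".format(e))
        return ""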

9
pkg/utils/network.py Normal file
View File

@ -0,0 +1,9 @@
def wrapper_proxies() -> dict:
"""获取代理"""
import config
return {
"http": config.openai_config['proxy'],
"https": config.openai_config['proxy']
} if 'proxy' in config.openai_config and (config.openai_config['proxy'] is not None) else None
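wrapper_proxies() builds the proxies mapping that requests expects and returns None when no proxy is configured, which keeps the call equivalent to the pre-patch requests.get calls that passed no proxies argument at all. A minimal usage sketch (the release URL is only an example):

import requests
import pkg.utils.network as network

resp = requests.get(
    url="https://api.github.com/repos/RockChinQ/QChatGPT/releases/latest",
    proxies=network.wrapper_proxies()  # None is accepted and means "no explicit proxy"
)
print(resp.json().get("tag_name"))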

View File

@ -6,6 +6,7 @@ import requests
import json
import pkg.utils.constants
import pkg.utils.network as network
def check_dulwich_closure():
@ -36,7 +37,8 @@ def pull_latest(repo_path: str) -> bool:
def get_release_list() -> list:
"""获取发行列表"""
rls_list_resp = requests.get(
url="https://api.github.com/repos/RockChinQ/QChatGPT/releases"
url="https://api.github.com/repos/RockChinQ/QChatGPT/releases",
proxies=network.wrapper_proxies()
)
rls_list = rls_list_resp.json()
@ -83,7 +85,10 @@ def update_all(cli: bool = False) -> bool:
else:
print("开始下载最新版本: {}".format(latest_rls['zipball_url']))
zip_url = latest_rls['zipball_url']
zip_resp = requests.get(url=zip_url)
zip_resp = requests.get(
url=zip_url,
proxies=network.wrapper_proxies()
)
zip_data = zip_resp.content
# 检查temp/updater目录
@ -126,6 +131,15 @@ def update_all(cli: bool = False) -> bool:
dst = src.replace(source_root, ".")
if os.path.exists(dst):
os.remove(dst)
# 检查目标文件夹是否存在
if not os.path.exists(os.path.dirname(dst)):
os.makedirs(os.path.dirname(dst))
# 检查目标文件是否存在
if not os.path.exists(dst):
# 创建目标文件
open(dst, "w").close()
shutil.copy(src, dst)
# 把current_tag写入文件
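The added block ensures the destination directory exists before shutil.copy runs, which previously failed whenever a release archive introduced a new folder. Creating an empty destination file first is not strictly needed, since shutil.copy creates the target itself; a slightly tighter equivalent, shown only as a sketch rather than a suggested patch:

import os
import shutil

def copy_into_tree(src: str, dst: str):
    """Copy src over dst, creating any missing parent directories first."""
    if os.path.exists(dst):
        os.remove(dst)
    os.makedirs(os.path.dirname(dst) or ".", exist_ok=True)
    shutil.copy(src, dst)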

View File

@ -1,5 +1,5 @@
requests~=2.28.1
openai~=0.27.0
openai~=0.27.2
dulwich~=0.21.3
colorlog~=6.6.0
yiri-mirai~=0.2.6.1

1
res/announcement Normal file
View File

@ -0,0 +1 @@

View File

@ -0,0 +1,12 @@
{
"prompt": [
{
"role": "system",
"content": "You are a helpful assistant. 如果我需要帮助,你要说“输入!help获得帮助”"
},
{
"role": "assistant",
"content": "好的我是一个能干的AI助手。 如果你需要帮助,我会说“输入!help获得帮助”"
}
]
}
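scenario/default-template.json stores a scenario preset as a list of ChatGPT-style messages under a "prompt" key. How pkg/openai/dprompt.py or the !reset <preset> command consumes such files is not shown in this section, so the loader below is only an illustrative sketch with invented function and path names.

import json
import os

def load_scenario(name: str = "default-template") -> list:
    """Load a preset's message list from scenario/<name>.json (illustrative only)."""
    path = os.path.join("scenario", name + ".json")
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)["prompt"]

# e.g. seed a fresh session with the preset before appending the user's message
messages = load_scenario()
messages.append({"role": "user", "content": "你好"})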

View File

View File