qingxu99 committed
Commit 315e78e
2 Parent(s): b6b4ba6 8fd21fe

Merge branch 'master' into huggingface
README.md CHANGED

````diff
@@ -18,9 +18,9 @@ pinned: false
 > `pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/`
 >
 
-# <img src="docs/logo.png" width="40" > ChatGPT 学术优化
+# <img src="docs/logo.png" width="40" > GPT 学术优化 (ChatGPT Academic)
 
-**如果喜欢这个项目,请给它一个Star;如果你发明了更好用的快捷键或函数插件,欢迎发issue或者pull requests**
+**如果喜欢这个项目,请给它一个Star;如果你发明了更好用的快捷键或函数插件,欢迎发pull requests**
 
 If you like this project, please give it a Star. If you've come up with more useful academic shortcuts or functional plugins, feel free to open an issue or pull request. We also have a README in [English|](docs/README_EN.md)[日本語|](docs/README_JP.md)[Русский|](docs/README_RS.md)[Français](docs/README_FR.md) translated by this project itself.
 
@@ -38,25 +38,25 @@ If you like this project, please give it a Star.
 --- | ---
 一键润色 | 支持一键润色、一键查找论文语法错误
 一键中英互译 | 一键中英互译
-一键代码解释 | 可以正确显示代码、解释代码
+一键代码解释 | 显示代码、解释代码、生成代码、给代码加注释
 [自定义快捷键](https://www.bilibili.com/video/BV14s4y1E7jN) | 支持自定义快捷键
-[配置代理服务器](https://www.bilibili.com/video/BV1rc411W7Dr) | 支持代理连接OpenAI/Google等,秒解锁ChatGPT互联网[实时信息聚合](https://www.bilibili.com/video/BV1om4y127ck/)能力
 模块化设计 | 支持自定义强大的[函数插件](https://github.com/binary-husky/chatgpt_academic/tree/master/crazy_functions),插件支持[热更新](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
 [自我程序剖析](https://www.bilibili.com/video/BV1cj411A7VW) | [函数插件] [一键读懂](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A)本项目的源代码
 [程序剖析](https://www.bilibili.com/video/BV1cj411A7VW) | [函数插件] 一键可以剖析其他Python/C/C++/Java/Lua/...项目树
-读论文 | [函数插件] 一键解读latex论文全文并生成摘要
+读论文、[翻译](https://www.bilibili.com/video/BV1KT411x7Wn)论文 | [函数插件] 一键解读latex/pdf论文全文并生成摘要
 Latex全文[翻译](https://www.bilibili.com/video/BV1nk4y1Y7Js/)、[润色](https://www.bilibili.com/video/BV1FT411H7c5/) | [函数插件] 一键翻译或润色latex论文
 批量注释生成 | [函数插件] 一键批量生成函数注释
-chat分析报告生成 | [函数插件] 运行后自动生成总结汇报
 Markdown[中英互译](https://www.bilibili.com/video/BV1yo4y157jV/) | [函数插件] 看到上面5种语言的[README](https://github.com/binary-husky/chatgpt_academic/blob/master/docs/README_EN.md)了吗?
+chat分析报告生成 | [函数插件] 运行后自动生成总结汇报
-[arxiv小助手](https://www.bilibili.com/video/BV1LM4y1279X) | [函数插件] 输入arxiv文章url即可一键翻译摘要+下载PDF
 [PDF论文全文翻译功能](https://www.bilibili.com/video/BV1KT411x7Wn) | [函数插件] PDF论文提取题目&摘要+翻译全文(多线程)
+[Arxiv小助手](https://www.bilibili.com/video/BV1LM4y1279X) | [函数插件] 输入arxiv文章url即可一键翻译摘要+下载PDF
 [谷歌学术统合小助手](https://www.bilibili.com/video/BV19L411U7ia) | [函数插件] 给定任意谷歌学术搜索页面URL,让gpt帮你[写relatedworks](https://www.bilibili.com/video/BV1GP411U7Az/)
+互联网信息聚合+GPT | [函数插件] 一键[让GPT先从互联网获取信息](https://www.bilibili.com/video/BV1om4y127ck),再回答问题,让信息永不过时
 公式/图片/表格显示 | 可以同时显示公式的[tex形式和渲染形式](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png),支持公式、代码高亮
 多线程函数插件支持 | 支持多线调用chatgpt,一键处理[海量文本](https://www.bilibili.com/video/BV1FT411H7c5/)或程序
 启动暗色gradio[主题](https://github.com/binary-husky/chatgpt_academic/issues/173) | 在浏览器url后面添加```/?__dark-theme=true```可以切换dark主题
 [多LLM模型](https://www.bilibili.com/video/BV1wT411p7yf)支持,[API2D](https://api2d.com/)接口支持 | 同时被GPT3.5、GPT4和[清华ChatGLM](https://github.com/THUDM/ChatGLM-6B)伺候的感觉一定会很不错吧?
-huggingface免科学上网[在线体验](https://huggingface.co/spaces/qingxu98/gpt-academic) | 登陆huggingface后复制[此空间](https://huggingface.co/spaces/qingxu98/gpt-academic)
+更多LLM模型接入,支持[huggingface部署](https://huggingface.co/spaces/qingxu98/gpt-academic) | 新加入Newbing测试接口(新必应AI)
 …… | ……
 
 </div>
@@ -93,9 +93,6 @@
 <img src="https://user-images.githubusercontent.com/96192199/232537274-deca0563-7aa6-4b5d-94a2-b7c453c47794.png" width="700" >
 </div>
 
-多种大语言模型混合调用[huggingface测试版](https://huggingface.co/spaces/qingxu98/academic-chatgpt-beta)(huggingface版不支持chatglm)
-
-
 ---
 
 ## 安装-方法1:直接运行 (Windows, Linux or MacOS)
@@ -106,20 +103,16 @@ git clone https://github.com/binary-husky/chatgpt_academic.git
 cd chatgpt_academic
 ```
 
-2. 配置API_KEY和代理设置
+2. 配置API_KEY
 
-在`config.py`中,配置 海外Proxy 和 OpenAI API KEY,说明如下
-```
-1. 如果你在国内,需要设置海外代理才能够顺利使用OpenAI API,设置方法请仔细阅读config.py(1.修改其中的USE_PROXY为True; 2.按照说明修改其中的proxies)。
-2. 配置 OpenAI API KEY。支持任意数量的OpenAI的密钥和API2D的密钥共存/负载均衡,多个KEY用英文逗号分隔即可,例如输入 API_KEY="OpenAI密钥1,API2D密钥2,OpenAI密钥3,OpenAI密钥4"
-3. 与代理网络有关的issue(网络超时、代理不起作用)汇总到 https://github.com/binary-husky/chatgpt_academic/issues/1
-```
+在`config.py`中,配置API KEY等[设置](https://github.com/binary-husky/gpt_academic/issues/1) 。
 
 (P.S. 程序运行时会优先检查是否存在名为`config_private.py`的私密配置文件,并用其中的配置覆盖`config.py`的同名配置。因此,如果您能理解我们的配置读取逻辑,我们强烈建议您在`config.py`旁边创建一个名为`config_private.py`的新配置文件,并把`config.py`中的配置转移(复制)到`config_private.py`中。`config_private.py`不受git管控,可以让您的隐私信息更加安全。)
 
 
 3. 安装依赖
 ```sh
-# (选择I: 如熟悉python)推荐
+# (选择I: 如熟悉python)(python版本3.9以上,越新越好)
 python -m pip install -r requirements.txt
 # 备注:使用官方pip源或者阿里pip源,其他pip源(如一些大学的pip)有可能出问题,临时换源方法:python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
 
@@ -141,14 +134,8 @@ python main.py
 
 5. 测试函数插件
 ```
-- 测试Python项目分析
-    (选择1)input区域 输入 `./crazy_functions/test_project/python/dqn` , 然后点击 "解析整个Python项目"
-    (选择2)展开文件上传区,将python文件/包含python文件的压缩包拖拽进去,在出现反馈提示后, 然后点击 "解析整个Python项目"
-- 测试自我代码解读(本项目自译解)
-    点击 "[多线程Demo] 解析此项目本身(源码自译解)"
 - 测试函数插件模板函数(要求gpt回答历史上的今天发生了什么),您可以根据此函数为模板,实现更复杂的功能
     点击 "[函数插件模板Demo] 历史上的今天"
-- 函数插件区下拉菜单中有更多功能可供选择
 ```
 
 ## 安装-方法2:使用Docker
@@ -159,7 +146,7 @@
 # 下载项目
 git clone https://github.com/binary-husky/chatgpt_academic.git
 cd chatgpt_academic
-# 配置 “海外Proxy”, “API_KEY” 以及 “WEB_PORT” (例如50923) 等
+# 配置 Proxy”, “API_KEY” 以及 “WEB_PORT” (例如50923) 等
 用任意文本编辑器编辑 config.py
 # 安装
 docker build -t gpt-academic .
@@ -182,26 +169,20 @@ docker run --rm -it --net=host --gpus=all gpt-academic
 docker run --rm -it --net=host --gpus=all gpt-academic bash
 ```
 
-## 安装-方法3:其他部署方式(需要云服务器知识与经验)
+## 安装-方法3:其他部署姿势
 
-1. 远程云服务器部署
+1. 如何使用反代URL/AzureAPI
+按照`config.py`中的说明配置API_URL_REDIRECT即可。
+
+2. 远程云服务器部署(需要云服务器知识与经验)
 请访问[部署wiki-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)
 
-2. 使用WSL2(Windows Subsystem for Linux 子系统)
+3. 使用WSL2(Windows Subsystem for Linux 子系统)
 请访问[部署wiki-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
 
-3. 如何在二级网址(如`http://localhost/subpath`)下运行
+4. 如何在二级网址(如`http://localhost/subpath`)下运行
 请访问[FastAPI运行说明](docs/WithFastapi.md)
 
-## 安装-代理配置
-1. 常规方法
-[配置代理](https://github.com/binary-husky/chatgpt_academic/issues/1)
-
-2. 纯新手教程
-[纯新手教程](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BB%A3%E7%90%86%E8%BD%AF%E4%BB%B6%E9%97%AE%E9%A2%98%E7%9A%84%E6%96%B0%E6%89%8B%E8%A7%A3%E5%86%B3%E6%96%B9%E6%B3%95%EF%BC%88%E6%96%B9%E6%B3%95%E5%8F%AA%E9%80%82%E7%94%A8%E4%BA%8E%E6%96%B0%E6%89%8B%EF%BC%89)
-
-
 ---
 
 ## 自定义新的便捷按钮 / 自定义函数插件
@@ -228,74 +209,48 @@ docker run --rm -it --net=host --gpus=all gpt-academic bash
 本项目的插件编写、调试难度很低,只要您具备一定的python基础知识,就可以仿照我们提供的模板实现自己的插件功能。
 详情请参考[函数插件指南](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)。
 
-
 ---
 
-## 部分功能展示
+## 其他功能说明
 
-1. 图片显示:
+1. 对话保存功能。在函数插件区调用 `保存当前的对话` 即可将当前对话保存为可读+可复原的html文件,如图:
 
 <div align="center">
-<img src="https://user-images.githubusercontent.com/96192199/228737599-bf0a9d9c-1808-4f43-ae15-dfcc7af0f295.png" width="800" >
+<img src="https://user-images.githubusercontent.com/96192199/235222390-24a9acc0-680f-49f5-bc81-2f3161f1e049.png" width="500" >
 </div>
 
-2. 本项目的代码自译解(如果一个程序能够读懂并剖析自己):
+在函数插件区(下拉菜单)调用 `载入对话历史存档` ,即可还原之前的会话。
 
-<div align="center">
-<img src="https://user-images.githubusercontent.com/96192199/226936850-c77d7183-0749-4c1c-9875-fd4891842d0c.png" width="800" >
-</div>
-
-<div align="center">
-<img src="https://user-images.githubusercontent.com/96192199/226936618-9b487e4b-ab5b-4b6e-84c6-16942102e917.png" width="800" >
-</div>
-
-3. 其他任意Python/Cpp/Java/Go/Rect/...项目剖析:
-<div align="center">
-<img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="800" >
-</div>
-
-<div align="center">
-<img src="https://user-images.githubusercontent.com/96192199/226969067-968a27c1-1b9c-486b-8b81-ab2de8d3f88a.png" width="800" >
-</div>
-
-4. Latex论文一键阅读理解与摘要生成
-<div align="center">
-<img src="https://user-images.githubusercontent.com/96192199/227504406-86ab97cd-f208-41c3-8e4a-7000e51cf980.png" width="800" >
-</div>
-
-5. 自动报告生成
+2. 生成报告。大部分插件都会在执行结束后,生成工作报告
 <div align="center">
 <img src="https://user-images.githubusercontent.com/96192199/227503770-fe29ce2c-53fd-47b0-b0ff-93805f0c2ff4.png" height="300" >
 <img src="https://user-images.githubusercontent.com/96192199/227504617-7a497bb3-0a2a-4b50-9a8a-95ae60ea7afd.png" height="300" >
 <img src="https://user-images.githubusercontent.com/96192199/227504005-efeaefe0-b687-49d0-bf95-2d7b7e66c348.png" height="300" >
 </div>
 
-6. 模块化功能设计
+3. 模块化功能设计,简单的接口却能支持强大的功能
 <div align="center">
 <img src="https://user-images.githubusercontent.com/96192199/229288270-093643c1-0018-487a-81e6-1d7809b6e90f.png" height="400" >
 <img src="https://user-images.githubusercontent.com/96192199/227504931-19955f78-45cd-4d1c-adac-e71e50957915.png" height="400" >
 </div>
 
-
-7. 源代码转译英文
+4. 这是一个能够“自我译解”的开源项目
 
 <div align="center">
-<img src="https://user-images.githubusercontent.com/96192199/229720562-fe6c3508-6142-4635-a83d-21eb3669baee.png" height="400" >
+<img src="https://user-images.githubusercontent.com/96192199/226936850-c77d7183-0749-4c1c-9875-fd4891842d0c.png" width="500" >
 </div>
 
-8. 互联网在线信息综合
+5. 译解其他开源项目,不在话下
 
 <div align="center">
-<img src="https://user-images.githubusercontent.com/96192199/233575247-fb00819e-6d1b-4bb7-bd54-1d7528f03dd9.png" width="800" >
-<img src="https://user-images.githubusercontent.com/96192199/233779501-5ce826f0-6cca-4d59-9e5f-b4eacb8cc15f.png" width="800" >
-
+<img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="500" >
 </div>
 
+<div align="center">
+<img src="https://user-images.githubusercontent.com/96192199/226969067-968a27c1-1b9c-486b-8b81-ab2de8d3f88a.png" width="500" >
+</div>
 
-## Todo 与 版本规划:
-- version 3.3+ (todo): NewBing支持
+## 版本:
+- version 3.5(Todo): 使用自然语言调用本项目的所有函数插件(高优先级)
+- version 3.4(Todo): 完善chatglm本地大模型的多线支持
+- version 3.3: +互联网信息综合功能
 - version 3.2: 函数插件支持更多参数接口 (保存对话功能, 解读任意语言代码+同时询问任意的LLM组合)
 - version 3.1: 支持同时问询多个gpt模型!支持api2d,支持多个apikey负载均衡
 - version 3.0: 对chatglm和其他小型llm的支持
@@ -308,6 +263,9 @@
 - version 2.0: 引入模块化函数插件
 - version 1.0: 基础功能
 
+gpt_academic开发者QQ群:734063350
+
+
 ## 参考与学习
 
 ```
````
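The README diff keeps the note that `API_KEY` may hold any number of OpenAI and API2D keys separated by commas for load balancing (version 3.1's "多个apikey负载均衡"). A minimal sketch of how such a comma-separated key string could be parsed and rotated; the function names and keys below are illustrative assumptions, not the project's actual implementation:

```python
import itertools

def parse_api_keys(api_key_setting):
    # Split a comma-separated API_KEY setting into individual keys,
    # dropping surrounding whitespace and empty entries.
    return [k.strip() for k in api_key_setting.split(',') if k.strip()]

def make_key_picker(keys):
    # Return a callable that hands out keys round-robin, spreading
    # requests across all configured keys (one way to load-balance).
    cycle = itertools.cycle(keys)
    return lambda: next(cycle)

# Example: one OpenAI key and one API2D key configured together (fake values)
API_KEY = "sk-openaikey111, fk-api2dkey222"
pick = make_key_picker(parse_api_keys(API_KEY))
```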
check_proxy.py CHANGED

```diff
@@ -56,22 +56,24 @@ def patch_and_restart(path):
     """
     一键更新协议:覆盖和重启
     """
-    import distutils
+    from distutils import dir_util
     import shutil
     import os
     import sys
     import time
+    import glob
     from colorful import print亮黄, print亮绿, print亮红
     # if not using config_private, move origin config.py as config_private.py
     if not os.path.exists('config_private.py'):
        print亮黄('由于您没有设置config_private.py私密配置,现将您的现有配置移动至config_private.py以防止配置丢失,',
              '另外您可以随时在history子文件夹下找回旧版的程序。')
        shutil.copyfile('config.py', 'config_private.py')
-    distutils.dir_util.copy_tree(path+'/chatgpt_academic-master', './')
-    import subprocess
+    path_new_version = glob.glob(path + '/*-master')[0]
+    dir_util.copy_tree(path_new_version, './')
     print亮绿('代码已经更新,即将更新pip包依赖……')
     for i in reversed(range(5)): time.sleep(1); print(i)
     try:
+        import subprocess
         subprocess.check_call([sys.executable, '-m', 'pip', 'install', '-r', 'requirements.txt'])
     except:
         print亮红('pip包依赖安装出现问题,需要手动安装新增的依赖库 `python -m pip install -r requirements.txt`,然后在用常规的`python main.py`的方式启动。')
```
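The change above replaces the hard-coded `chatgpt_academic-master` folder with `glob.glob(path + '/*-master')[0]`, so the one-click updater still finds the unzipped GitHub archive after a repository rename. The lookup can be sketched in isolation (the helper name is mine, and a throwaway directory stands in for the unzipped archive):

```python
import glob
import os
import tempfile

def find_master_dir(extract_path):
    # After unzipping a GitHub archive, locate the single "<repo>-master"
    # folder whatever the repo is called, instead of hard-coding its name.
    candidates = glob.glob(os.path.join(extract_path, '*-master'))
    if not candidates:
        raise FileNotFoundError(f'no *-master folder under {extract_path}')
    return candidates[0]

# Demo: a temp directory standing in for the extraction target
tmp = tempfile.mkdtemp()
os.makedirs(os.path.join(tmp, 'gpt_academic-master'))
new_version = find_master_dir(tmp)
```

Note `glob.glob(...)[0]` in the patch raises `IndexError` on an empty match; the sketch raises a more descriptive error instead.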
config.py CHANGED

```diff
@@ -10,7 +10,7 @@ if USE_PROXY:
     # [地址] 懂的都懂,不懂就填localhost或者127.0.0.1肯定错不了(localhost意思是代理软件安装在本机上)
     # [端口] 在代理软件的设置里找。虽然不同的代理软件界面不一样,但端口号都应该在最显眼的位置上
 
-    # 代理网络的地址,打开你的科学上网软件查看代理的协议(socks5/http)、地址(localhost)和端口(11284)
+    # 代理网络的地址,打开你的*学*网软件查看代理的协议(socks5/http)、地址(localhost)和端口(11284)
     proxies = {
         # [协议]:// [地址] :[端口]
         "http": "socks5h://localhost:11284",
@@ -33,6 +33,7 @@ CODE_HIGHLIGHT = True
 
 # 窗口布局
 LAYOUT = "LEFT-RIGHT" # "LEFT-RIGHT"(左右布局) # "TOP-DOWN"(上下布局)
+DARK_MODE = True # "LEFT-RIGHT"(左右布局) # "TOP-DOWN"(上下布局)
 
 # 发送请求到OpenAI后,等待多久判定为超时
 TIMEOUT_SECONDS = 30
@@ -58,8 +59,16 @@ CONCURRENT_COUNT = 100
 AUTHENTICATION = []
 
 # 重新URL重新定向,实现更换API_URL的作用(常规情况下,不要修改!!)
-# 格式 {"https://api.openai.com/v1/chat/completions": "重定向的URL"}
+# (高危设置!通过修改此设置,您将把您的API-KEY和对话隐私完全暴露给您设定的中间人!)
+# 格式 {"https://api.openai.com/v1/chat/completions": "在这里填写重定向的api.openai.com的URL"}
+# 例如 API_URL_REDIRECT = {"https://api.openai.com/v1/chat/completions": "https://ai.open.com/api/conversation"}
 API_URL_REDIRECT = {}
 
 # 如果需要在二级路径下运行(常规情况下,不要修改!!)(需要配合修改main.py才能生效!)
 CUSTOM_PATH = "/"
+
+# 如果需要使用newbing,把newbing的长长的cookie放到这里
+NEWBING_STYLE = "creative"  # ["creative", "balanced", "precise"]
+NEWBING_COOKIES = """
+your bing cookies here
+"""
```
crazy_functional.py CHANGED

```diff
@@ -21,15 +21,22 @@ def get_crazy_functions():
     from crazy_functions.总结word文档 import 总结word文档
     from crazy_functions.解析JupyterNotebook import 解析ipynb文件
     from crazy_functions.对话历史存档 import 对话历史存档
+    from crazy_functions.对话历史存档 import 载入对话历史存档
+    from crazy_functions.对话历史存档 import 删除所有本地对话历史记录
+
+    from crazy_functions.批量Markdown翻译 import Markdown英译中
     function_plugins = {
-
         "解析整个Python项目": {
             "Color": "stop",    # 按钮颜色
             "Function": HotReload(解析一个Python项目)
         },
-        "保存当前的对话": {
+        "载入对话历史存档": {
             "AsButton":False,
-            "Function": HotReload(对话历史存档)
+            "Function": HotReload(载入对话历史存档)
+        },
+        "删除所有本地对话历史记录(请谨慎操作)": {
+            "AsButton":False,
+            "Function": HotReload(删除所有本地对话历史记录)
         },
         "[测试功能] 解析Jupyter Notebook文件": {
             "Color": "stop",
@@ -81,11 +88,21 @@ def get_crazy_functions():
             "Color": "stop",    # 按钮颜色
             "Function": HotReload(读文章写摘要)
         },
+        "Markdown/Readme英译中": {
+            # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
+            "Color": "stop",
+            "Function": HotReload(Markdown英译中)
+        },
         "批量生成函数注释": {
             "Color": "stop",    # 按钮颜色
+            "AsButton": False,  # 加入下拉菜单中
             "Function": HotReload(批量生成函数注释)
         },
+        "保存当前的对话": {
+            "Function": HotReload(对话历史存档)
+        },
         "[多线程Demo] 解析此项目本身(源码自译解)": {
+            "AsButton": False,  # 加入下拉菜单中
             "Function": HotReload(解析项目本身)
         },
         "[多线程demo] 把本项目源代码切换成全英文": {
@@ -93,7 +110,7 @@ def get_crazy_functions():
             "AsButton": False,  # 加入下拉菜单中
             "Function": HotReload(全项目切换英文)
         },
-        "[函数插件模板Demo] 历史上的今天": {
+        "[插件demo] 历史上的今天": {
             # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
             "Function": HotReload(高阶功能模板函数)
         },
@@ -110,7 +127,6 @@ def get_crazy_functions():
     from crazy_functions.Latex全文翻译 import Latex中译英
     from crazy_functions.Latex全文翻译 import Latex英译中
     from crazy_functions.批量Markdown翻译 import Markdown中译英
-    from crazy_functions.批量Markdown翻译 import Markdown英译中
 
     function_plugins.update({
         "批量翻译PDF文档(多线程)": {
@@ -175,12 +191,7 @@ def get_crazy_functions():
             "AsButton": False,  # 加入下拉菜单中
             "Function": HotReload(Markdown中译英)
         },
-        "[测试功能] 批量Markdown英译中(输入路径或上传压缩包)": {
-            # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
-            "Color": "stop",
-            "AsButton": False,  # 加入下拉菜单中
-            "Function": HotReload(Markdown英译中)
-        },
+
 
     })
```
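The `function_plugins` registry edited above maps a menu label to metadata (`Color`, `AsButton`) plus a `Function` callable; entries with `"AsButton": False` land in the dropdown menu rather than getting their own button. A minimal model of that structure and its dispatch, with an illustrative plugin that is not in the project:

```python
def greet(txt):
    # Stand-in for a real plugin body such as HotReload(解析一个Python项目)
    return f"hello, {txt}"

# Each entry mirrors the shape used by get_crazy_functions()
function_plugins = {
    "打招呼Demo": {           # illustrative label, not a real plugin
        "Color": "stop",      # button color in the UI
        "AsButton": False,    # dropdown item rather than a dedicated button
        "Function": greet,
    },
}

def dispatch(plugins, name, txt):
    # Look up a plugin by its menu label and invoke its callable.
    return plugins[name]["Function"](txt)
```

In the project the callable is wrapped in `HotReload(...)`, which re-imports the plugin module on each call so edits take effect without restarting; the sketch omits that layer.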
crazy_functions/crazy_utils.py CHANGED

```diff
@@ -1,5 +1,4 @@
-import traceback
-from toolbox import update_ui, get_conf
+from toolbox import update_ui, get_conf, trimmed_format_exc
 
 def input_clipping(inputs, history, max_token_limit):
     import numpy as np
@@ -94,12 +93,12 @@ def request_gpt_model_in_new_thread_with_ui_alive(
                 continue # 返回重试
             else:
                 # 【选择放弃】
-                tb_str = '```\n' + traceback.format_exc() + '```'
+                tb_str = '```\n' + trimmed_format_exc() + '```'
                 mutable[0] += f"[Local Message] 警告,在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n"
                 return mutable[0] # 放弃
         except:
             # 【第三种情况】:其他错误:重试几次
-            tb_str = '```\n' + traceback.format_exc() + '```'
+            tb_str = '```\n' + trimmed_format_exc() + '```'
             print(tb_str)
             mutable[0] += f"[Local Message] 警告,在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n"
             if retry_op > 0:
@@ -173,7 +172,7 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
     if max_workers == -1: # 读取配置文件
         try: max_workers, = get_conf('DEFAULT_WORKER_NUM')
         except: max_workers = 8
-    if max_workers <= 0 or max_workers >= 20: max_workers = 8
+    if max_workers <= 0: max_workers = 3
     # 屏蔽掉 chatglm的多线程,可能会导致严重卡顿
     if not (llm_kwargs['llm_model'].startswith('gpt-') or llm_kwargs['llm_model'].startswith('api2d-')):
         max_workers = 1
@@ -220,14 +219,14 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
                 continue # 返回重试
             else:
                 # 【选择放弃】
-                tb_str = '```\n' + traceback.format_exc() + '```'
+                tb_str = '```\n' + trimmed_format_exc() + '```'
                 gpt_say += f"[Local Message] 警告,线程{index}在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n"
                 if len(mutable[index][0]) > 0: gpt_say += "此线程失败前收到的回答:\n\n" + mutable[index][0]
                 mutable[index][2] = "输入过长已放弃"
                 return gpt_say # 放弃
         except:
             # 【第三种情况】:其他错误
-            tb_str = '```\n' + traceback.format_exc() + '```'
+            tb_str = '```\n' + trimmed_format_exc() + '```'
             print(tb_str)
             gpt_say += f"[Local Message] 警告,线程{index}在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n"
             if len(mutable[index][0]) > 0: gpt_say += "此线程失败前收到的回答:\n\n" + mutable[index][0]
@@ -564,3 +563,46 @@ def read_and_clean_pdf_text(fp):
     # print亮绿('***************************')
 
     return meta_txt, page_one_meta
+
+
+def get_files_from_everything(txt, type): # type='.md'
+    """
+    这个函数是用来获取指定目录下所有指定类型(如.md)的文件,并且对于网络上的文件,也可以获取它。
+    下面是对每个参数和返回值的说明:
+    参数
+    - txt: 路径或网址,表示要搜索的文件或者文件夹路径或网络上的文件。
+    - type: 字符串,表示要搜索的文件类型。默认是.md。
+    返回值
+    - success: 布尔值,表示函数是否成功执行。
+    - file_manifest: 文件路径列表,里面包含以指定类型为后缀名的所有文件的绝对路径。
+    - project_folder: 字符串,表示文件所在的文件夹路径。如果是网络上的文件,就是临时文件夹的路径。
+    该函数详细注释已添加,请确认是否满足您的需要。
+    """
+    import glob, os
+
+    success = True
+    if txt.startswith('http'):
+        # 网络的远程文件
+        import requests
+        from toolbox import get_conf
+        proxies, = get_conf('proxies')
+        r = requests.get(txt, proxies=proxies)
+        with open('./gpt_log/temp'+type, 'wb+') as f: f.write(r.content)
+        project_folder = './gpt_log/'
+        file_manifest = ['./gpt_log/temp'+type]
+    elif txt.endswith(type):
+        # 直接给定文件
+        file_manifest = [txt]
+        project_folder = os.path.dirname(txt)
+    elif os.path.exists(txt):
+        # 本地路径,递归搜索
+        project_folder = txt
+        file_manifest = [f for f in glob.glob(f'{project_folder}/**/*'+type, recursive=True)]
+        if len(file_manifest) == 0:
+            success = False
+    else:
+        project_folder = None
+        file_manifest = []
+        success = False
+
+    return success, file_manifest, project_folder
```
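The new `get_files_from_everything` helper distinguishes three inputs: an `http` URL (downloaded into `./gpt_log/`), a direct file path, and a directory searched recursively. The two local branches can be exercised without the project's `toolbox`; this sketch reproduces just that subset of the logic under a different name:

```python
import glob
import os
import tempfile

def collect_files(txt, ext):
    # Local-path subset of get_files_from_everything: a direct file path,
    # or a directory searched recursively for files ending in ext.
    if txt.endswith(ext):
        return True, [txt], os.path.dirname(txt)
    if os.path.exists(txt):
        manifest = glob.glob(f'{txt}/**/*{ext}', recursive=True)
        return len(manifest) > 0, manifest, txt
    return False, [], None

# Demo: a nested temp directory containing one matching .md file
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'sub'))
with open(os.path.join(root, 'sub', 'a.md'), 'w') as f:
    f.write('# hi')
ok, manifest, folder = collect_files(root, '.md')
```

As in the original, the direct-file branch is checked before existence, so a path ending in the right extension is accepted without verifying the file exists.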
crazy_functions/对话历史存档.py CHANGED
@@ -1,7 +1,8 @@
 from toolbox import CatchException, update_ui
 from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
+import re
 
-def write_chat_to_file(chatbot, file_name=None):
+def write_chat_to_file(chatbot, history=None, file_name=None):
     """
     将对话记录history以Markdown格式写入文件中。如果没有指定文件名,则使用当前时间生成文件名。
     """
@@ -11,20 +12,62 @@ def write_chat_to_file(chatbot, file_name=None):
     file_name = 'chatGPT对话历史' + time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + '.html'
     os.makedirs('./gpt_log/', exist_ok=True)
     with open(f'./gpt_log/{file_name}', 'w', encoding='utf8') as f:
+        from theme import advanced_css
+        f.write(f'<head><title>对话历史</title><style>{advanced_css}</style></head>')
         for i, contents in enumerate(chatbot):
-            for content in contents:
+            for j, content in enumerate(contents):
                 try:    # 这个bug没找到触发条件,暂时先这样顶一下
                     if type(content) != str: content = str(content)
                 except:
                     continue
                 f.write(content)
-                f.write('\n\n')
+                if j == 0:
+                    f.write('<hr style="border-top: dotted 3px #ccc;">')
             f.write('<hr color="red"> \n\n')
-
+        f.write('<hr color="blue"> \n\n raw chat context:\n')
+        f.write('<code>')
+        for h in history:
+            f.write("\n>>>" + h)
+        f.write('</code>')
     res = '对话历史写入:' + os.path.abspath(f'./gpt_log/{file_name}')
     print(res)
     return res
 
+def gen_file_preview(file_name):
+    try:
+        with open(file_name, 'r', encoding='utf8') as f:
+            file_content = f.read()
+        # pattern to match the text between <head> and </head>
+        pattern = re.compile(r'<head>.*?</head>', flags=re.DOTALL)
+        file_content = re.sub(pattern, '', file_content)
+        html, history = file_content.split('<hr color="blue"> \n\n raw chat context:\n')
+        history = history.strip('<code>')
+        history = history.strip('</code>')
+        history = history.split("\n>>>")
+        return list(filter(lambda x:x!="", history))[0][:100]
+    except:
+        return ""
+
+def read_file_to_chat(chatbot, history, file_name):
+    with open(file_name, 'r', encoding='utf8') as f:
+        file_content = f.read()
+    # pattern to match the text between <head> and </head>
+    pattern = re.compile(r'<head>.*?</head>', flags=re.DOTALL)
+    file_content = re.sub(pattern, '', file_content)
+    html, history = file_content.split('<hr color="blue"> \n\n raw chat context:\n')
+    history = history.strip('<code>')
+    history = history.strip('</code>')
+    history = history.split("\n>>>")
+    history = list(filter(lambda x:x!="", history))
+    html = html.split('<hr color="red"> \n\n')
+    html = list(filter(lambda x:x!="", html))
+    chatbot.clear()
+    for i, h in enumerate(html):
+        i_say, gpt_say = h.split('<hr style="border-top: dotted 3px #ccc;">')
+        chatbot.append([i_say, gpt_say])
+    chatbot.append([f"存档文件详情?", f"[Local Message] 载入对话{len(html)}条,上下文{len(history)}条。"])
+    return chatbot, history
+
 @CatchException
 def 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     """
@@ -37,6 +80,64 @@ def 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_
     web_port 当前软件运行的端口号
     """
-    chatbot.append(("保存当前对话", f"[Local Message] {write_chat_to_file(chatbot)}"))
+    chatbot.append(("保存当前对话",
+        f"[Local Message] {write_chat_to_file(chatbot, history)},您可以调用“载入对话历史存档”还原当下的对话。\n警告!被保存的对话历史可以被使用该系统的任何人查阅。"))
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
 
+def hide_cwd(str):
+    import os
+    current_path = os.getcwd()
+    replace_path = "."
+    return str.replace(current_path, replace_path)
+
+@CatchException
+def 载入对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+    """
+    txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
+    llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
+    plugin_kwargs 插件模型的参数,暂时没有用武之地
+    chatbot 聊天显示框的句柄,用于显示给用户
+    history 聊天历史,前情提要
+    system_prompt 给gpt的静默提醒
+    web_port 当前软件运行的端口号
+    """
+    from .crazy_utils import get_files_from_everything
+    success, file_manifest, _ = get_files_from_everything(txt, type='.html')
+
+    if not success:
+        if txt == "": txt = '空空如也的输入栏'
+        import glob
+        local_history = "<br/>".join(["`"+hide_cwd(f)+f" ({gen_file_preview(f)})"+"`" for f in glob.glob(f'gpt_log/**/chatGPT对话历史*.html', recursive=True)])
+        chatbot.append([f"正在查找对话历史文件(html格式): {txt}", f"找不到任何html文件: {txt}。但本地存储了以下历史文件,您可以将任意一个文件路径粘贴到输入区,然后重试:<br/>{local_history}"])
+        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
+        return
+
+    try:
+        chatbot, history = read_file_to_chat(chatbot, history, file_manifest[0])
+        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
+    except:
+        chatbot.append([f"载入对话历史文件", f"对话历史文件损坏!"])
+        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
+        return
+
+@CatchException
+def 删除所有本地对话历史记录(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+    """
+    txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
+    llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
+    plugin_kwargs 插件模型的参数,暂时没有用武之地
+    chatbot 聊天显示框的句柄,用于显示给用户
+    history 聊天历史,前情提要
+    system_prompt 给gpt的静默提醒
+    web_port 当前软件运行的端口号
+    """
+    import glob, os
+    local_history = "<br/>".join(["`"+hide_cwd(f)+"`" for f in glob.glob(f'gpt_log/**/chatGPT对话历史*.html', recursive=True)])
+    for f in glob.glob(f'gpt_log/**/chatGPT对话历史*.html', recursive=True):
+        os.remove(f)
+    chatbot.append([f"删除所有历史对话文件", f"已删除<br/>{local_history}"])
+    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
+    return
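The save/load pair above round-trips the chat through sentinel `<hr>` markers: `write_chat_to_file` joins each question/answer with a dotted rule, each turn with a red rule, and the raw history behind a blue rule, and `read_file_to_chat` splits on the same strings. A minimal self-contained sketch of that round-trip (hypothetical helper names; it trims `<code>` tags by length rather than with `str.strip`, which strips a *character set* and can eat trailing `c/o/d/e` letters from the last history entry):

```python
# Sentinel strings must match exactly between writer and reader.
QA_SEP = '<hr style="border-top: dotted 3px #ccc;">'    # question | answer
TURN_SEP = '<hr color="red"> \n\n'                      # between chat turns
CTX_SEP = '<hr color="blue"> \n\n raw chat context:\n'  # chat | raw history

def dump_chat(chatbot, history):
    # Mirror of write_chat_to_file's body layout (no CSS header).
    body = ""
    for i_say, gpt_say in chatbot:
        body += str(i_say) + QA_SEP + str(gpt_say) + TURN_SEP
    body += CTX_SEP + "<code>"
    for h in history:
        body += "\n>>>" + h
    return body + "</code>"

def load_chat(file_content):
    # Mirror of read_file_to_chat's parsing, with exact-length tag removal.
    html, history = file_content.split(CTX_SEP)
    if history.startswith("<code>"): history = history[len("<code>"):]
    if history.endswith("</code>"): history = history[:-len("</code>")]
    history = [h for h in history.split("\n>>>") if h != ""]
    chat = []
    for turn in filter(None, html.split(TURN_SEP)):
        i_say, gpt_say = turn.split(QA_SEP)
        chat.append([i_say, gpt_say])
    return chat, history
```

Because the loader depends on these exact literals, any change to a separator string on the write side silently breaks every previously saved archive.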
crazy_functions/批量Markdown翻译.py CHANGED
@@ -84,7 +84,33 @@ def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
 
 
+def get_files_from_everything(txt):
+    import glob, os
+
+    success = True
+    if txt.startswith('http'):
+        # 网络的远程文件
+        txt = txt.replace("https://github.com/", "https://raw.githubusercontent.com/")
+        txt = txt.replace("/blob/", "/")
+        import requests
+        from toolbox import get_conf
+        proxies, = get_conf('proxies')
+        r = requests.get(txt, proxies=proxies)
+        with open('./gpt_log/temp.md', 'wb+') as f: f.write(r.content)
+        project_folder = './gpt_log/'
+        file_manifest = ['./gpt_log/temp.md']
+    elif txt.endswith('.md'):
+        # 直接给定文件
+        file_manifest = [txt]
+        project_folder = os.path.dirname(txt)
+    elif os.path.exists(txt):
+        # 本地路径,递归搜索
+        project_folder = txt
+        file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.md', recursive=True)]
+    else:
+        success = False
+
+    return success, file_manifest, project_folder
 
 
 @CatchException
@@ -98,6 +124,7 @@ def Markdown英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
     # 尝试导入依赖,如果缺少依赖,则给出安装建议
     try:
         import tiktoken
+        import glob, os
     except:
         report_execption(chatbot, history,
             a=f"解析项目: {txt}",
@@ -105,19 +132,21 @@ def Markdown英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
     history = []    # 清空历史,以免输入溢出
-    import glob, os
-    if os.path.exists(txt):
-        project_folder = txt
-    else:
+
+    success, file_manifest, project_folder = get_files_from_everything(txt)
+
+    if not success:
+        # 什么都没有
         if txt == "": txt = '空空如也的输入栏'
         report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
-    file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.md', recursive=True)]
+
     if len(file_manifest) == 0:
         report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.md文件: {txt}")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
+
     yield from 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en->zh')
 
 
@@ -135,6 +164,7 @@ def Markdown中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
     # 尝试导入依赖,如果缺少依赖,则给出安装建议
     try:
         import tiktoken
+        import glob, os
     except:
         report_execption(chatbot, history,
             a=f"解析项目: {txt}",
@@ -142,18 +172,13 @@ def Markdown中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
     history = []    # 清空历史,以免输入溢出
-    import glob, os
-    if os.path.exists(txt):
-        project_folder = txt
-    else:
+    success, file_manifest, project_folder = get_files_from_everything(txt)
+    if not success:
+        # 什么都没有
         if txt == "": txt = '空空如也的输入栏'
         report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
-    if txt.endswith('.md'):
-        file_manifest = [txt]
-    else:
-        file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.md', recursive=True)]
     if len(file_manifest) == 0:
         report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.md文件: {txt}")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
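The remote-file branch of `get_files_from_everything` hinges on a simple URL rewrite: a GitHub "blob" page link is turned into its raw-content equivalent before downloading. The same two `str.replace` calls in isolation:

```python
def github_blob_to_raw(url):
    # Rewrite a github.com "blob" page URL into the raw.githubusercontent.com
    # URL that serves the file's plain content, as done in get_files_from_everything.
    url = url.replace("https://github.com/", "https://raw.githubusercontent.com/")
    url = url.replace("/blob/", "/")
    return url
```

Note the rewrite is purely textual: a non-GitHub `http` URL passes through unchanged and is fetched as-is.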
crazy_functions/解析项目源代码.py CHANGED
@@ -1,5 +1,6 @@
 from toolbox import update_ui
 from toolbox import CatchException, report_execption, write_results_to_file
+from .crazy_utils import input_clipping
 
 def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
     import os, copy
@@ -61,13 +62,15 @@ def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
         previous_iteration_files.extend([os.path.relpath(fp, project_folder) for index, fp in enumerate(this_iteration_file_manifest)])
         previous_iteration_files_string = ', '.join(previous_iteration_files)
         current_iteration_focus = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(this_iteration_file_manifest)])
-        i_say = f'根据以上分析,对程序的整体功能和构架重新做出概括。然后用一张markdown表格整理每个文件的功能(包括{previous_iteration_files_string})。'
+        i_say = f'用一张Markdown表格简要描述以下文件的功能:{previous_iteration_files_string}。根据以上分析,用一句话概括程序的整体功能。'
         inputs_show_user = f'根据以上分析,对程序的整体功能和构架重新做出概括,由于输入长度限制,可能需要分组处理,本组文件为 {current_iteration_focus} + 已经汇总的文件组。'
         this_iteration_history = copy.deepcopy(this_iteration_gpt_response_collection)
         this_iteration_history.append(last_iteration_result)
+        # 裁剪input
+        inputs, this_iteration_history_feed = input_clipping(inputs=i_say, history=this_iteration_history, max_token_limit=2560)
         result = yield from request_gpt_model_in_new_thread_with_ui_alive(
-            inputs=i_say, inputs_show_user=inputs_show_user, llm_kwargs=llm_kwargs, chatbot=chatbot,
-            history=this_iteration_history, # 迭代之前的分析
+            inputs=inputs, inputs_show_user=inputs_show_user, llm_kwargs=llm_kwargs, chatbot=chatbot,
+            history=this_iteration_history_feed, # 迭代之前的分析
             sys_prompt="你是一个程序架构分析师,正在分析一个项目的源代码。")
         report_part_2.extend([i_say, result])
         last_iteration_result = result
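The new `input_clipping(inputs=..., history=..., max_token_limit=2560)` call trims the accumulated analysis before each request so the prompt fits the model's context window. The real helper in `crazy_utils` counts model tokens; the following is only an illustrative sketch of the idea under the assumption that the oldest history entries are dropped first, with plain character length standing in for a tokenizer:

```python
def input_clipping_sketch(inputs, history, max_token_limit=2560, count=len):
    # Illustrative stand-in for crazy_utils.input_clipping: drop the oldest
    # history entries until inputs + history fit the budget. `count` defaults
    # to character length here; the real helper measures model tokens.
    history = list(history)
    while history and count(inputs) + sum(count(h) for h in history) > max_token_limit:
        history.pop(0)  # oldest context is the least relevant to the current step
    return inputs, history
```

Dropping from the front preserves the most recent per-file summaries, which the follow-up "概括整体功能" prompt depends on most.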
docs/Dockerfile+ChatGLM CHANGED
@@ -1,6 +1,6 @@
 # How to build | 如何构建: docker build -t gpt-academic --network=host -f Dockerfile+ChatGLM .
-# How to run | 如何运行 (1) 直接运行(选择0号GPU): docker run --rm -it --net=host --gpus="0" gpt-academic
-# How to run | 如何运行 (2) 我想运行之前进容器做一些调整: docker run --rm -it --net=host --gpus="0" gpt-academic bash
+# How to run | (1) 我想直接一键运行(选择0号GPU): docker run --rm -it --net=host --gpus \"device=0\" gpt-academic
+# How to run | (2) 我想运行之前进容器做一些调整(选择1号GPU): docker run --rm -it --net=host --gpus \"device=1\" gpt-academic bash
 
 # 从NVIDIA源构建,从而支持显卡运行(检查宿主的nvidia-smi中的cuda版本必须>=11.3)
 FROM nvidia/cuda:11.3.1-runtime-ubuntu20.04
@@ -14,6 +14,7 @@ RUN apt-get install -y git python python3 python-dev python3-dev --fix-missing
 RUN $useProxyNetwork curl cip.cc
 RUN sed -i '$ d' /etc/proxychains.conf
 RUN sed -i '$ d' /etc/proxychains.conf
+# 在这里填写主机的代理协议(用于从github拉取代码)
 RUN echo "socks5 127.0.0.1 10880" >> /etc/proxychains.conf
 ARG useProxyNetwork=proxychains
 # # comment out above if you do not need proxy network | 如果不需要翻墙 - 从此行向上删除
@@ -21,14 +22,15 @@ ARG useProxyNetwork=proxychains
 
 # use python3 as the system default python
 RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.8
-
+# 下载pytorch
+RUN $useProxyNetwork python3 -m pip install torch --extra-index-url https://download.pytorch.org/whl/cu113
 # 下载分支
 WORKDIR /gpt
 RUN $useProxyNetwork git clone https://github.com/binary-husky/chatgpt_academic.git
 WORKDIR /gpt/chatgpt_academic
 RUN $useProxyNetwork python3 -m pip install -r requirements.txt
 RUN $useProxyNetwork python3 -m pip install -r request_llm/requirements_chatglm.txt
-RUN $useProxyNetwork python3 -m pip install torch --extra-index-url https://download.pytorch.org/whl/cu113
+RUN $useProxyNetwork python3 -m pip install -r request_llm/requirements_newbing.txt
 
 # 预热CHATGLM参数(非必要 可选步骤)
 RUN echo ' \n\
@@ -48,6 +50,7 @@ RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
 # 可同时填写多个API-KEY,支持openai的key和api2d的key共存,用英文逗号分割,例如API_KEY = "sk-openaikey1,fkxxxx-api2dkey2,........"
 # LLM_MODEL 是选择初始的模型
 # LOCAL_MODEL_DEVICE 是选择chatglm等本地模型运行的设备,可选 cpu 和 cuda
+# [说明: 以下内容与`config.py`一一对应,请查阅config.py来完成以下配置的填写]
 RUN echo ' \n\
 API_KEY = "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,fkxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" \n\
 USE_PROXY = True \n\
docs/test_markdown_format.py ADDED
@@ -0,0 +1,130 @@
+sample = """
+[1]: https://baike.baidu.com/item/%E8%B4%A8%E8%83%BD%E6%96%B9%E7%A8%8B/1884527 "质能方程(质能方程式)_百度百科"
+[2]: https://www.zhihu.com/question/348249281 "如何理解质能方程 E=mc²? - 知乎"
+[3]: https://zhuanlan.zhihu.com/p/32597385 "质能方程的推导与理解 - 知乎 - 知乎专栏"
+
+你好,这是必应。质能方程是描述质量与能量之间的当量关系的方程[^1^][1]。用tex格式,质能方程可以写成$$E=mc^2$$,其中$E$是能量,$m$是质量,$c$是光速[^2^][2] [^3^][3]。
+"""
+import re
+
+def preprocess_newbing_out(s):
+    pattern = r'\^(\d+)\^'   # 匹配^数字^
+    pattern2 = r'\[(\d+)\]'  # 匹配[数字]
+    sub = lambda m: '\['+m.group(1)+'\]'  # 将匹配到的数字作为替换值
+    result = re.sub(pattern, sub, s)  # 替换操作
+    if '[1]' in result:
+        result += '<br/><hr style="border-top: dotted 1px #44ac5c;"><br/><small>' + "<br/>".join([re.sub(pattern2, sub, r) for r in result.split('\n') if r.startswith('[')]) + '</small>'
+    return result
+
+
+def close_up_code_segment_during_stream(gpt_reply):
+    """
+    在gpt输出代码的中途(输出了前面的```,但还没输出完后面的```),补上后面的```
+
+    Args:
+        gpt_reply (str): GPT模型返回的回复字符串。
+
+    Returns:
+        str: 返回一个新的字符串,将输出代码片段的“后面的```”补上。
+
+    """
+    if '```' not in gpt_reply:
+        return gpt_reply
+    if gpt_reply.endswith('```'):
+        return gpt_reply
+
+    # 排除了以上两个情况,统计```标记的数量
+    segments = gpt_reply.split('```')
+    n_mark = len(segments) - 1
+    if n_mark % 2 == 1:
+        # print('输出代码片段中!')
+        return gpt_reply+'\n```'
+    else:
+        return gpt_reply
+
+import markdown
+from latex2mathml.converter import convert as tex2mathml
+from functools import wraps, lru_cache
+def markdown_convertion(txt):
+    """
+    将Markdown格式的文本转换为HTML格式。如果包含数学公式,则先将公式转换为HTML格式。
+    """
+    pre = '<div class="markdown-body">'
+    suf = '</div>'
+    if txt.startswith(pre) and txt.endswith(suf):
+        # print('警告,输入了已经经过转化的字符串,二次转化可能出问题')
+        return txt  # 已经被转化过,不需要再次转化
+
+    markdown_extension_configs = {
+        'mdx_math': {
+            'enable_dollar_delimiter': True,
+            'use_gitlab_delimiters': False,
+        },
+    }
+    find_equation_pattern = r'<script type="math/tex(?:.*?)>(.*?)</script>'
+
+    def tex2mathml_catch_exception(content, *args, **kwargs):
+        try:
+            content = tex2mathml(content, *args, **kwargs)
+        except:
+            content = content
+        return content
+
+    def replace_math_no_render(match):
+        content = match.group(1)
+        if 'mode=display' in match.group(0):
+            content = content.replace('\n', '</br>')
+            return f"<font color=\"#00FF00\">$$</font><font color=\"#FF00FF\">{content}</font><font color=\"#00FF00\">$$</font>"
+        else:
+            return f"<font color=\"#00FF00\">$</font><font color=\"#FF00FF\">{content}</font><font color=\"#00FF00\">$</font>"
+
+    def replace_math_render(match):
+        content = match.group(1)
+        if 'mode=display' in match.group(0):
+            if '\\begin{aligned}' in content:
+                content = content.replace('\\begin{aligned}', '\\begin{array}')
+                content = content.replace('\\end{aligned}', '\\end{array}')
+                content = content.replace('&', ' ')
+            content = tex2mathml_catch_exception(content, display="block")
+            return content
+        else:
+            return tex2mathml_catch_exception(content)
+
+    def markdown_bug_hunt(content):
+        """
+        解决一个mdx_math的bug(单$包裹begin命令时多余<script>)
+        """
+        content = content.replace('<script type="math/tex">\n<script type="math/tex; mode=display">', '<script type="math/tex; mode=display">')
+        content = content.replace('</script>\n</script>', '</script>')
+        return content
+
+
+    if ('$' in txt) and ('```' not in txt):  # 有$标识的公式符号,且没有代码段```的标识
+        # convert everything to html format
+        split = markdown.markdown(text='---')
+        convert_stage_1 = markdown.markdown(text=txt, extensions=['mdx_math', 'fenced_code', 'tables', 'sane_lists'], extension_configs=markdown_extension_configs)
+        convert_stage_1 = markdown_bug_hunt(convert_stage_1)
+        # re.DOTALL: Make the '.' special character match any character at all, including a newline; without this flag, '.' will match anything except a newline. Corresponds to the inline flag (?s).
+        # 1. convert to easy-to-copy tex (do not render math)
+        convert_stage_2_1, n = re.subn(find_equation_pattern, replace_math_no_render, convert_stage_1, flags=re.DOTALL)
+        # 2. convert to rendered equation
+        convert_stage_2_2, n = re.subn(find_equation_pattern, replace_math_render, convert_stage_1, flags=re.DOTALL)
+        # cat them together
+        return pre + convert_stage_2_1 + f'{split}' + convert_stage_2_2 + suf
+    else:
+        return pre + markdown.markdown(txt, extensions=['fenced_code', 'codehilite', 'tables', 'sane_lists']) + suf
+
+
+sample = preprocess_newbing_out(sample)
+sample = close_up_code_segment_during_stream(sample)
+sample = markdown_convertion(sample)
+with open('tmp.html', 'w', encoding='utf8') as f:
+    f.write("""
+
+<head>
+    <title>My Website</title>
+    <link rel="stylesheet" type="text/css" href="style.css">
+</head>
+
+""")
+    f.write(sample)
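The fence-closing helper in this test file relies on a simple parity argument: every complete code block contributes two ``` markers, so an odd count means a block is still open mid-stream. A pared-down copy with the same logic:

```python
def close_dangling_fence(gpt_reply):
    # Same parity check as close_up_code_segment_during_stream: an odd number
    # of ``` markers means a code block is still streaming, so append the
    # closing fence; otherwise return the reply unchanged.
    if '```' not in gpt_reply or gpt_reply.endswith('```'):
        return gpt_reply
    n_mark = len(gpt_reply.split('```')) - 1
    return gpt_reply + '\n```' if n_mark % 2 == 1 else gpt_reply
```

Applying it to every partial reply keeps the Markdown renderer from swallowing the rest of the page while a code block is mid-stream.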
main.py CHANGED
@@ -174,9 +174,6 @@ def main():
             yield from ArgsGeneralWrapper(crazy_fns[k]["Function"])(*args, **kwargs)
         click_handle = switchy_bt.click(route,[switchy_bt, *input_combo, gr.State(PORT)], output_combo)
         click_handle.then(on_report_generated, [file_upload, chatbot], [file_upload, chatbot])
-        # def expand_file_area(file_upload, area_file_up):
-        #     if len(file_upload)>0: return {area_file_up: gr.update(open=True)}
-        # click_handle.then(expand_file_area, [file_upload, area_file_up], [area_file_up])
         cancel_handles.append(click_handle)
     # 终止按钮的回调函数注册
     stopBtn.click(fn=None, inputs=None, outputs=None, cancels=cancel_handles)
@@ -190,7 +187,9 @@ def main():
     print(f"\t(暗色主题): http://localhost:{PORT}/?__dark-theme=true")
     def open():
         time.sleep(2)       # 打开浏览器
-        webbrowser.open_new_tab(f"http://localhost:{PORT}/?__dark-theme=true")
+        DARK_MODE, = get_conf('DARK_MODE')
+        if DARK_MODE: webbrowser.open_new_tab(f"http://localhost:{PORT}/?__dark-theme=true")
+        else: webbrowser.open_new_tab(f"http://localhost:{PORT}")
     threading.Thread(target=open, name="open-browser", daemon=True).start()
     threading.Thread(target=auto_update, name="self-upgrade", daemon=True).start()
     threading.Thread(target=warm_up_modules, name="warm-up", daemon=True).start()
request_llm/bridge_all.py CHANGED
@@ -11,7 +11,7 @@
 import tiktoken
 from functools import lru_cache
 from concurrent.futures import ThreadPoolExecutor
-from toolbox import get_conf
+from toolbox import get_conf, trimmed_format_exc
 
 from .bridge_chatgpt import predict_no_ui_long_connection as chatgpt_noui
 from .bridge_chatgpt import predict as chatgpt_ui
@@ -19,6 +19,9 @@ from .bridge_chatgpt import predict as chatgpt_ui
 from .bridge_chatglm import predict_no_ui_long_connection as chatglm_noui
 from .bridge_chatglm import predict as chatglm_ui
 
+from .bridge_newbing import predict_no_ui_long_connection as newbing_noui
+from .bridge_newbing import predict as newbing_ui
+
 # from .bridge_tgui import predict_no_ui_long_connection as tgui_noui
 # from .bridge_tgui import predict as tgui_ui
 
@@ -48,6 +51,7 @@ class LazyloadTiktoken(object):
 API_URL_REDIRECT, = get_conf("API_URL_REDIRECT")
 openai_endpoint = "https://api.openai.com/v1/chat/completions"
 api2d_endpoint = "https://openai.api2d.net/v1/chat/completions"
+newbing_endpoint = "wss://sydney.bing.com/sydney/ChatHub"
 # 兼容旧版的配置
 try:
     API_URL, = get_conf("API_URL")
@@ -59,6 +63,7 @@ except:
 # 新版配置
 if openai_endpoint in API_URL_REDIRECT: openai_endpoint = API_URL_REDIRECT[openai_endpoint]
 if api2d_endpoint in API_URL_REDIRECT: api2d_endpoint = API_URL_REDIRECT[api2d_endpoint]
+if newbing_endpoint in API_URL_REDIRECT: newbing_endpoint = API_URL_REDIRECT[newbing_endpoint]
 
 
 # 获取tokenizer
@@ -116,7 +121,15 @@ model_info = {
         "tokenizer": tokenizer_gpt35,
         "token_cnt": get_token_num_gpt35,
     },
-
+    # newbing
+    "newbing": {
+        "fn_with_ui": newbing_ui,
+        "fn_without_ui": newbing_noui,
+        "endpoint": newbing_endpoint,
+        "max_token": 4096,
+        "tokenizer": tokenizer_gpt35,
+        "token_cnt": get_token_num_gpt35,
+    },
 }
 
 
@@ -128,10 +141,7 @@ def LLM_CATCH_EXCEPTION(f):
         try:
             return f(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience)
         except Exception as e:
-            from toolbox import get_conf
-            import traceback
-            proxies, = get_conf('proxies')
-            tb_str = '\n```\n' + traceback.format_exc() + '\n```\n'
+            tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
             observe_window[0] = tb_str
             return tb_str
     return decorated
@@ -182,7 +192,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, obser
 
     def mutex_manager(window_mutex, observe_window):
         while True:
-            time.sleep(0.5)
+            time.sleep(0.25)
            if not window_mutex[-1]: break
            # 看门狗(watchdog)
            for i in range(n_model):
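Registering newbing follows the table-driven dispatch `bridge_all.py` already uses: each backend contributes `fn_with_ui`/`fn_without_ui` plus endpoint metadata, and callers look the model up by name instead of branching on it. A stripped-down sketch of the pattern (toy predict function and registry, not the real bridges):

```python
def _echo_noui(inputs, **kwargs):
    # Stand-in for a *_noui bridge function such as newbing_noui.
    return f"echo: {inputs}"

# Each entry mirrors the shape of a model_info record in bridge_all.py.
model_info_sketch = {
    "echo": {
        "fn_without_ui": _echo_noui,
        "endpoint": None,
        "max_token": 4096,
    },
}

def predict_no_ui(model, inputs, **kwargs):
    # Dispatch by model name, as predict_no_ui_long_connection does:
    # adding a backend is one new dict entry, with no caller changes.
    method = model_info_sketch[model]["fn_without_ui"]
    return method(inputs, **kwargs)
```

This is why the newbing addition touches only the import block, the endpoint setup, and one `model_info` entry: the dispatch code itself never changes.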
request_llm/bridge_chatglm.py CHANGED
@@ -1,6 +1,7 @@
 
 from transformers import AutoModel, AutoTokenizer
 import time
+import threading
 import importlib
 from toolbox import update_ui, get_conf
 from multiprocessing import Process, Pipe
@@ -18,6 +19,7 @@ class GetGLMHandle(Process):
         self.success = True
         self.check_dependency()
         self.start()
+        self.threadLock = threading.Lock()
 
     def check_dependency(self):
         try:
@@ -72,6 +74,7 @@ class GetGLMHandle(Process):
 
     def stream_chat(self, **kwargs):
         # 主进程执行
+        self.threadLock.acquire()
         self.parent.send(kwargs)
         while True:
             res = self.parent.recv()
@@ -79,7 +82,7 @@ class GetGLMHandle(Process):
                 yield res
             else:
                 break
-        return
+        self.threadLock.release()
 
 global glm_handle
 glm_handle = None
@@ -145,10 +148,13 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
         history_feedin.append([history[2*i], history[2*i+1]] )
 
     # 开始接收chatglm的回复
+    response = "[Local Message]: 等待ChatGLM响应中 ..."
     for response in glm_handle.stream_chat(query=inputs, history=history_feedin, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
         chatbot[-1] = (inputs, response)
         yield from update_ui(chatbot=chatbot, history=history)
 
     # 总结输出
+    if response == "[Local Message]: 等待ChatGLM响应中 ...":
+        response = "[Local Message]: ChatGLM响应异常 ..."
     history.extend([inputs, response])
     yield from update_ui(chatbot=chatbot, history=history)
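The `threadLock` added to `GetGLMHandle` serializes concurrent callers of `stream_chat`: the handle owns a single pipe to the child process, so two interleaved conversations would corrupt each other's traffic. The same guard in miniature (note this sketch releases in a `finally` block, which is safer than the diff's release-on-exhaustion if a consumer abandons the generator early):

```python
import threading

class SerializedStream:
    # Mirrors the lock added to GetGLMHandle.stream_chat: only one thread may
    # drive the single underlying pipe at a time.
    def __init__(self):
        self.threadLock = threading.Lock()
        self.log = []  # records which thread's traffic is on the "pipe"

    def stream_chat(self, tag, n=3):
        self.threadLock.acquire()
        try:
            for i in range(n):
                self.log.append(tag)       # stands in for parent.send/recv traffic
                yield f"{tag}-{i}"
        finally:
            self.threadLock.release()      # release even if the consumer stops early

def worker(stream, tag, out):
    out.extend(list(stream.stream_chat(tag)))
```

Without the lock, two UI threads asking ChatGLM at once would read each other's `parent.recv()` replies; with it, one request simply waits for the other to finish.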
request_llm/bridge_chatgpt.py CHANGED
@@ -21,7 +21,7 @@ import importlib

  # config_private.py放自己的秘密如API和代理网址
  # 读取时首先看是否存在私密的config_private配置文件(不受git管控),如果有,则覆盖原config文件
- from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history
  proxies, API_KEY, TIMEOUT_SECONDS, MAX_RETRY = \
      get_conf('proxies', 'API_KEY', 'TIMEOUT_SECONDS', 'MAX_RETRY')

@@ -215,7 +215,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
              chatbot[-1] = (chatbot[-1][0], "[Local Message] Not enough point. API2D账户点数不足.")
          else:
              from toolbox import regular_txt_to_markdown
-             tb_str = '```\n' + traceback.format_exc() + '```'
              chatbot[-1] = (chatbot[-1][0], f"[Local Message] 异常 \n\n{tb_str} \n\n{regular_txt_to_markdown(chunk_decoded[4:])}")
              yield from update_ui(chatbot=chatbot, history=history, msg="Json异常" + error_msg)  # 刷新界面
              return

  # config_private.py放自己的秘密如API和代理网址
  # 读取时首先看是否存在私密的config_private配置文件(不受git管控),如果有,则覆盖原config文件
+ from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history, trimmed_format_exc
  proxies, API_KEY, TIMEOUT_SECONDS, MAX_RETRY = \
      get_conf('proxies', 'API_KEY', 'TIMEOUT_SECONDS', 'MAX_RETRY')

              chatbot[-1] = (chatbot[-1][0], "[Local Message] Not enough point. API2D账户点数不足.")
          else:
              from toolbox import regular_txt_to_markdown
+             tb_str = '```\n' + trimmed_format_exc() + '```'
              chatbot[-1] = (chatbot[-1][0], f"[Local Message] 异常 \n\n{tb_str} \n\n{regular_txt_to_markdown(chunk_decoded[4:])}")
              yield from update_ui(chatbot=chatbot, history=history, msg="Json异常" + error_msg)  # 刷新界面
              return
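The swap from `traceback.format_exc()` to `trimmed_format_exc()` matters because the traceback is rendered straight into the chat window. The new helper, added to toolbox.py in this same commit, replaces the current working directory with `.` so absolute filesystem paths are not leaked to users:

```python
import os
import traceback

def trimmed_format_exc():
    # same logic as toolbox.trimmed_format_exc in this commit:
    # hide the absolute project path in the rendered traceback
    s = traceback.format_exc()
    return s.replace(os.getcwd(), ".")

try:
    raise ValueError("boom")
except ValueError:
    tb = trimmed_format_exc()
```

The exception text and relative frame locations are preserved; only the leading absolute path component is rewritten.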
request_llm/bridge_newbing.py ADDED
@@ -0,0 +1,254 @@
+ """
+ ========================================================================
+ 第一部分:来自EdgeGPT.py
+ https://github.com/acheong08/EdgeGPT
+ ========================================================================
+ """
+ from .edge_gpt import NewbingChatbot
+ load_message = "等待NewBing响应。"
+ 
+ """
+ ========================================================================
+ 第二部分:子进程Worker(调用主体)
+ ========================================================================
+ """
+ import time
+ import json
+ import re
+ import logging
+ import asyncio
+ import importlib
+ import threading
+ from toolbox import update_ui, get_conf, trimmed_format_exc
+ from multiprocessing import Process, Pipe
+ 
+ def preprocess_newbing_out(s):
+     pattern = r'\^(\d+)\^'  # 匹配^数字^
+     sub = lambda m: '('+m.group(1)+')'  # 将匹配到的数字作为替换值
+     result = re.sub(pattern, sub, s)  # 替换操作
+     if '[1]' in result:
+         result += '\n\n```reference\n' + "\n".join([r for r in result.split('\n') if r.startswith('[')]) + '\n```\n'
+     return result
+ 
+ def preprocess_newbing_out_simple(result):
+     if '[1]' in result:
+         result += '\n\n```reference\n' + "\n".join([r for r in result.split('\n') if r.startswith('[')]) + '\n```\n'
+     return result
+ 
+ class NewBingHandle(Process):
+     def __init__(self):
+         super().__init__(daemon=True)
+         self.parent, self.child = Pipe()
+         self.newbing_model = None
+         self.info = ""
+         self.success = True
+         self.local_history = []
+         self.check_dependency()
+         self.start()
+         self.threadLock = threading.Lock()
+ 
+     def check_dependency(self):
+         try:
+             self.success = False
+             import certifi, httpx, rich
+             self.info = "依赖检测通过,等待NewBing响应。注意目前不能多人同时调用NewBing接口(有线程锁),否则将导致每个人的NewBing问询历史互相渗透。调用NewBing时,会自动使用已配置的代理。"
+             self.success = True
+         except:
+             self.info = "缺少依赖,如果要使用Newbing,除了基础的pip依赖以外,您还需要运行`pip install -r request_llm/requirements_newbing.txt`安装Newbing的依赖。"
+             self.success = False
+ 
+     def ready(self):
+         return self.newbing_model is not None
+ 
+     async def async_run(self):
+         # 读取配置
+         NEWBING_STYLE, = get_conf('NEWBING_STYLE')
+         from request_llm.bridge_all import model_info
+         endpoint = model_info['newbing']['endpoint']
+         while True:
+             # 等待
+             kwargs = self.child.recv()
+             question = kwargs['query']
+             history = kwargs['history']
+             system_prompt = kwargs['system_prompt']
+ 
+             # 是否重置
+             if len(self.local_history) > 0 and len(history) == 0:
+                 await self.newbing_model.reset()
+                 self.local_history = []
+ 
+             # 开始问问题
+             prompt = ""
+             if system_prompt not in self.local_history:
+                 self.local_history.append(system_prompt)
+                 prompt += system_prompt + '\n'
+ 
+             # 追加历史
+             for ab in history:
+                 a, b = ab
+                 if a not in self.local_history:
+                     self.local_history.append(a)
+                     prompt += a + '\n'
+                 # if b not in self.local_history:
+                 #     self.local_history.append(b)
+                 #     prompt += b + '\n'
+ 
+             # 问题
+             prompt += question
+             self.local_history.append(question)
+             print('question:', prompt)
+             # 提交
+             async for final, response in self.newbing_model.ask_stream(
+                 prompt=question,
+                 conversation_style=NEWBING_STYLE,  # ["creative", "balanced", "precise"]
+                 wss_link=endpoint,                 # "wss://sydney.bing.com/sydney/ChatHub"
+             ):
+                 if not final:
+                     print(response)
+                     self.child.send(str(response))
+                 else:
+                     print('-------- receive final ---------')
+                     self.child.send('[Finish]')
+                     # self.local_history.append(response)
+ 
+     def run(self):
+         """
+         这个函数运行在子进程
+         """
+         # 第一次运行,加载参数
+         self.success = False
+         self.local_history = []
+         if (self.newbing_model is None) or (not self.success):
+             # 代理设置
+             proxies, = get_conf('proxies')
+             if proxies is None:
+                 self.proxies_https = None
+             else:
+                 self.proxies_https = proxies['https']
+             # cookie
+             NEWBING_COOKIES, = get_conf('NEWBING_COOKIES')
+             try:
+                 cookies = json.loads(NEWBING_COOKIES)
+             except:
+                 self.success = False
+                 tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
+                 self.child.send(f'[Local Message] 不能加载Newbing组件。NEWBING_COOKIES未填写或有格式错误。')
+                 self.child.send('[Fail]')
+                 self.child.send('[Finish]')
+                 raise RuntimeError(f"不能加载Newbing组件。NEWBING_COOKIES未填写或有格式错误。")
+ 
+             try:
+                 self.newbing_model = NewbingChatbot(proxy=self.proxies_https, cookies=cookies)
+             except:
+                 self.success = False
+                 tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
+                 self.child.send(f'[Local Message] 不能加载Newbing组件。{tb_str}')
+                 self.child.send('[Fail]')
+                 self.child.send('[Finish]')
+                 raise RuntimeError(f"不能加载Newbing组件。")
+ 
+         self.success = True
+         try:
+             # 进入任务等待状态
+             asyncio.run(self.async_run())
+         except Exception:
+             tb_str = '```\n' + trimmed_format_exc() + '```'
+             self.child.send(f'[Local Message] Newbing失败 {tb_str}.')
+             self.child.send('[Fail]')
+             self.child.send('[Finish]')
+ 
+     def stream_chat(self, **kwargs):
+         """
+         这个函数运行在主进程
+         """
+         self.threadLock.acquire()
+         self.parent.send(kwargs)        # 发送请求到子进程
+         while True:
+             res = self.parent.recv()    # 等待newbing回复的片段
+             if res == '[Finish]':
+                 break                   # 结束
+             elif res == '[Fail]':
+                 self.success = False
+                 break
+             else:
+                 yield res               # newbing回复的片段
+         self.threadLock.release()
+ 
+ 
+ """
+ ========================================================================
+ 第三部分:主进程统一调用函数接口
+ ========================================================================
+ """
+ global newbing_handle
+ newbing_handle = None
+ 
+ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False):
+     """
+     多线程方法
+     函数的说明请见 request_llm/bridge_all.py
+     """
+     global newbing_handle
+     if (newbing_handle is None) or (not newbing_handle.success):
+         newbing_handle = NewBingHandle()
+         observe_window[0] = load_message + "\n\n" + newbing_handle.info
+         if not newbing_handle.success:
+             error = newbing_handle.info
+             newbing_handle = None
+             raise RuntimeError(error)
+ 
+     # 没有 sys_prompt 接口,因此把prompt加入 history
+     history_feedin = []
+     for i in range(len(history)//2):
+         history_feedin.append([history[2*i], history[2*i+1]] )
+ 
+     watch_dog_patience = 5  # 看门狗 (watchdog) 的耐心, 设置5秒即可
+     response = ""
+     observe_window[0] = "[Local Message]: 等待NewBing响应中 ..."
+     for response in newbing_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=sys_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
+         observe_window[0] = preprocess_newbing_out_simple(response)
+         if len(observe_window) >= 2:
+             if (time.time()-observe_window[1]) > watch_dog_patience:
+                 raise RuntimeError("程序终止。")
+     return preprocess_newbing_out_simple(response)
+ 
+ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream=True, additional_fn=None):
+     """
+     单线程方法
+     函数的说明请见 request_llm/bridge_all.py
+     """
+     chatbot.append((inputs, "[Local Message]: 等待NewBing响应中 ..."))
+ 
+     global newbing_handle
+     if (newbing_handle is None) or (not newbing_handle.success):
+         newbing_handle = NewBingHandle()
+         chatbot[-1] = (inputs, load_message + "\n\n" + newbing_handle.info)
+         yield from update_ui(chatbot=chatbot, history=[])
+         if not newbing_handle.success:
+             newbing_handle = None
+             return
+ 
+     if additional_fn is not None:
+         import core_functional
+         importlib.reload(core_functional)  # 热更新prompt
+         core_functional = core_functional.get_core_functions()
+         if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs)  # 获取预处理函数(如果有的话)
+         inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
+ 
+     history_feedin = []
+     for i in range(len(history)//2):
+         history_feedin.append([history[2*i], history[2*i+1]] )
+ 
+     chatbot[-1] = (inputs, "[Local Message]: 等待NewBing响应中 ...")
+     response = "[Local Message]: 等待NewBing响应中 ..."
+     yield from update_ui(chatbot=chatbot, history=history, msg="NewBing响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。")
+     for response in newbing_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=system_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
+         chatbot[-1] = (inputs, preprocess_newbing_out(response))
+         yield from update_ui(chatbot=chatbot, history=history, msg="NewBing响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。")
+     if response == "[Local Message]: 等待NewBing响应中 ...": response = "[Local Message]: NewBing响应异常,请刷新界面重试 ..."
+     history.extend([inputs, response])
+     logging.info(f'[raw_input] {inputs}')
+     logging.info(f'[response] {response}')
+     yield from update_ui(chatbot=chatbot, history=history, msg="完成全部响应,请提交新问题。")
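`preprocess_newbing_out` rewrites NewBing's `^n^` citation markers into plain `(n)` parentheses and, when footnote lines like `[1]: …` are present, gathers them into a trailing `reference` fenced block so the markdown renderer shows them verbatim. A self-contained run of the function as added above (the input URL is illustrative):

```python
import re

def preprocess_newbing_out(s):
    # ^1^ -> (1); collect "[n]: ..." footnote lines into a reference block
    result = re.sub(r'\^(\d+)\^', lambda m: '(' + m.group(1) + ')', s)
    if '[1]' in result:
        refs = "\n".join(r for r in result.split('\n') if r.startswith('['))
        result += '\n\n```reference\n' + refs + '\n```\n'
    return result

out = preprocess_newbing_out("Paris is the capital^1^\n[1]: https://example.com")
```

Only lines that start with `[` are copied into the reference block, so the answer text itself is never duplicated.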
request_llm/edge_gpt.py ADDED
@@ -0,0 +1,409 @@
+ """
+ ========================================================================
+ 第一部分:来自EdgeGPT.py
+ https://github.com/acheong08/EdgeGPT
+ ========================================================================
+ """
+ 
+ import argparse
+ import asyncio
+ import json
+ import os
+ import random
+ import re
+ import ssl
+ import sys
+ import uuid
+ from enum import Enum
+ from typing import Generator
+ from typing import Literal
+ from typing import Optional
+ from typing import Union
+ import websockets.client as websockets
+ 
+ DELIMITER = "\x1e"
+ 
+ 
+ # Generate random IP between range 13.104.0.0/14
+ FORWARDED_IP = (
+     f"13.{random.randint(104, 107)}.{random.randint(0, 255)}.{random.randint(0, 255)}"
+ )
+ 
+ HEADERS = {
+     "accept": "application/json",
+     "accept-language": "en-US,en;q=0.9",
+     "content-type": "application/json",
+     "sec-ch-ua": '"Not_A Brand";v="99", "Microsoft Edge";v="110", "Chromium";v="110"',
+     "sec-ch-ua-arch": '"x86"',
+     "sec-ch-ua-bitness": '"64"',
+     "sec-ch-ua-full-version": '"109.0.1518.78"',
+     "sec-ch-ua-full-version-list": '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"',
+     "sec-ch-ua-mobile": "?0",
+     "sec-ch-ua-model": "",
+     "sec-ch-ua-platform": '"Windows"',
+     "sec-ch-ua-platform-version": '"15.0.0"',
+     "sec-fetch-dest": "empty",
+     "sec-fetch-mode": "cors",
+     "sec-fetch-site": "same-origin",
+     "x-ms-client-request-id": str(uuid.uuid4()),
+     "x-ms-useragent": "azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32",
+     "Referer": "https://www.bing.com/search?q=Bing+AI&showconv=1&FORM=hpcodx",
+     "Referrer-Policy": "origin-when-cross-origin",
+     "x-forwarded-for": FORWARDED_IP,
+ }
+ 
+ HEADERS_INIT_CONVER = {
+     "authority": "edgeservices.bing.com",
+     "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7",
+     "accept-language": "en-US,en;q=0.9",
+     "cache-control": "max-age=0",
+     "sec-ch-ua": '"Chromium";v="110", "Not A(Brand";v="24", "Microsoft Edge";v="110"',
+     "sec-ch-ua-arch": '"x86"',
+     "sec-ch-ua-bitness": '"64"',
+     "sec-ch-ua-full-version": '"110.0.1587.69"',
+     "sec-ch-ua-full-version-list": '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"',
+     "sec-ch-ua-mobile": "?0",
+     "sec-ch-ua-model": '""',
+     "sec-ch-ua-platform": '"Windows"',
+     "sec-ch-ua-platform-version": '"15.0.0"',
+     "sec-fetch-dest": "document",
+     "sec-fetch-mode": "navigate",
+     "sec-fetch-site": "none",
+     "sec-fetch-user": "?1",
+     "upgrade-insecure-requests": "1",
+     "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36 Edg/110.0.1587.69",
+     "x-edge-shopping-flag": "1",
+     "x-forwarded-for": FORWARDED_IP,
+ }
+ 
+ def get_ssl_context():
+     import certifi
+     ssl_context = ssl.create_default_context()
+     ssl_context.load_verify_locations(certifi.where())
+     return ssl_context
+ 
+ 
+ class NotAllowedToAccess(Exception):
+     pass
+ 
+ 
+ class ConversationStyle(Enum):
+     creative = "h3imaginative,clgalileo,gencontentv3"
+     balanced = "galileo"
+     precise = "h3precise,clgalileo"
+ 
+ 
+ CONVERSATION_STYLE_TYPE = Optional[
+     Union[ConversationStyle, Literal["creative", "balanced", "precise"]]
+ ]
+ 
+ 
+ def _append_identifier(msg: dict) -> str:
+     """
+     Appends special character to end of message to identify end of message
+     """
+     # Convert dict to json string
+     return json.dumps(msg) + DELIMITER
+ 
+ 
+ def _get_ran_hex(length: int = 32) -> str:
+     """
+     Returns random hex string
+     """
+     return "".join(random.choice("0123456789abcdef") for _ in range(length))
+ 
+ 
+ class _ChatHubRequest:
+     """
+     Request object for ChatHub
+     """
+ 
+     def __init__(
+         self,
+         conversation_signature: str,
+         client_id: str,
+         conversation_id: str,
+         invocation_id: int = 0,
+     ) -> None:
+         self.struct: dict = {}
+ 
+         self.client_id: str = client_id
+         self.conversation_id: str = conversation_id
+         self.conversation_signature: str = conversation_signature
+         self.invocation_id: int = invocation_id
+ 
+     def update(
+         self,
+         prompt,
+         conversation_style,
+         options,
+     ) -> None:
+         """
+         Updates request object
+         """
+         if options is None:
+             options = [
+                 "deepleo",
+                 "enable_debug_commands",
+                 "disable_emoji_spoken_text",
+                 "enablemm",
+             ]
+         if conversation_style:
+             if not isinstance(conversation_style, ConversationStyle):
+                 conversation_style = getattr(ConversationStyle, conversation_style)
+             options = [
+                 "nlu_direct_response_filter",
+                 "deepleo",
+                 "disable_emoji_spoken_text",
+                 "responsible_ai_policy_235",
+                 "enablemm",
+                 conversation_style.value,
+                 "dtappid",
+                 "cricinfo",
+                 "cricinfov2",
+                 "dv3sugg",
+             ]
+         self.struct = {
+             "arguments": [
+                 {
+                     "source": "cib",
+                     "optionsSets": options,
+                     "sliceIds": [
+                         "222dtappid",
+                         "225cricinfo",
+                         "224locals0",
+                     ],
+                     "traceId": _get_ran_hex(32),
+                     "isStartOfSession": self.invocation_id == 0,
+                     "message": {
+                         "author": "user",
+                         "inputMethod": "Keyboard",
+                         "text": prompt,
+                         "messageType": "Chat",
+                     },
+                     "conversationSignature": self.conversation_signature,
+                     "participant": {
+                         "id": self.client_id,
+                     },
+                     "conversationId": self.conversation_id,
+                 },
+             ],
+             "invocationId": str(self.invocation_id),
+             "target": "chat",
+             "type": 4,
+         }
+         self.invocation_id += 1
+ 
+ 
+ class _Conversation:
+     """
+     Conversation API
+     """
+ 
+     def __init__(
+         self,
+         cookies,
+         proxy,
+     ) -> None:
+         self.struct: dict = {
+             "conversationId": None,
+             "clientId": None,
+             "conversationSignature": None,
+             "result": {"value": "Success", "message": None},
+         }
+         import httpx
+         self.proxy = proxy
+         proxy = (
+             proxy
+             or os.environ.get("all_proxy")
+             or os.environ.get("ALL_PROXY")
+             or os.environ.get("https_proxy")
+             or os.environ.get("HTTPS_PROXY")
+             or None
+         )
+         if proxy is not None and proxy.startswith("socks5h://"):
+             proxy = "socks5://" + proxy[len("socks5h://") :]
+         self.session = httpx.Client(
+             proxies=proxy,
+             timeout=30,
+             headers=HEADERS_INIT_CONVER,
+         )
+         for cookie in cookies:
+             self.session.cookies.set(cookie["name"], cookie["value"])
+ 
+         # Send GET request
+         response = self.session.get(
+             url=os.environ.get("BING_PROXY_URL")
+             or "https://edgeservices.bing.com/edgesvc/turing/conversation/create",
+         )
+         if response.status_code != 200:
+             response = self.session.get(
+                 "https://edge.churchless.tech/edgesvc/turing/conversation/create",
+             )
+         if response.status_code != 200:
+             print(f"Status code: {response.status_code}")
+             print(response.text)
+             print(response.url)
+             raise Exception("Authentication failed")
+         try:
+             self.struct = response.json()
+         except (json.decoder.JSONDecodeError, NotAllowedToAccess) as exc:
+             raise Exception(
+                 "Authentication failed. You have not been accepted into the beta.",
+             ) from exc
+         if self.struct["result"]["value"] == "UnauthorizedRequest":
+             raise NotAllowedToAccess(self.struct["result"]["message"])
+ 
+ 
+ class _ChatHub:
+     """
+     Chat API
+     """
+ 
+     def __init__(self, conversation) -> None:
+         self.wss = None
+         self.request: _ChatHubRequest
+         self.loop: bool
+         self.task: asyncio.Task
+         print(conversation.struct)
+         self.request = _ChatHubRequest(
+             conversation_signature=conversation.struct["conversationSignature"],
+             client_id=conversation.struct["clientId"],
+             conversation_id=conversation.struct["conversationId"],
+         )
+ 
+     async def ask_stream(
+         self,
+         prompt: str,
+         wss_link: str,
+         conversation_style: CONVERSATION_STYLE_TYPE = None,
+         raw: bool = False,
+         options: dict = None,
+     ) -> Generator[str, None, None]:
+         """
+         Ask a question to the bot
+         """
+         if self.wss and not self.wss.closed:
+             await self.wss.close()
+         # Check if websocket is closed
+         self.wss = await websockets.connect(
+             wss_link,
+             extra_headers=HEADERS,
+             max_size=None,
+             ssl=get_ssl_context()
+         )
+         await self._initial_handshake()
+         # Construct a ChatHub request
+         self.request.update(
+             prompt=prompt,
+             conversation_style=conversation_style,
+             options=options,
+         )
+         # Send request
+         await self.wss.send(_append_identifier(self.request.struct))
+         final = False
+         while not final:
+             objects = str(await self.wss.recv()).split(DELIMITER)
+             for obj in objects:
+                 if obj is None or not obj:
+                     continue
+                 response = json.loads(obj)
+                 if response.get("type") != 2 and raw:
+                     yield False, response
+                 elif response.get("type") == 1 and response["arguments"][0].get(
+                     "messages",
+                 ):
+                     resp_txt = response["arguments"][0]["messages"][0]["adaptiveCards"][
+                         0
+                     ]["body"][0].get("text")
+                     yield False, resp_txt
+                 elif response.get("type") == 2:
+                     final = True
+                     yield True, response
+ 
+     async def _initial_handshake(self) -> None:
+         await self.wss.send(_append_identifier({"protocol": "json", "version": 1}))
+         await self.wss.recv()
+ 
+     async def close(self) -> None:
+         """
+         Close the connection
+         """
+         if self.wss and not self.wss.closed:
+             await self.wss.close()
+ 
+ 
+ class NewbingChatbot:
+     """
+     Combines everything to make it seamless
+     """
+ 
+     def __init__(
+         self,
+         cookies,
+         proxy
+     ) -> None:
+         if cookies is None:
+             cookies = {}
+         self.cookies = cookies
+         self.proxy = proxy
+         self.chat_hub: _ChatHub = _ChatHub(
+             _Conversation(self.cookies, self.proxy),
+         )
+ 
+     async def ask(
+         self,
+         prompt: str,
+         wss_link: str,
+         conversation_style: CONVERSATION_STYLE_TYPE = None,
+         options: dict = None,
+     ) -> dict:
+         """
+         Ask a question to the bot
+         """
+         async for final, response in self.chat_hub.ask_stream(
+             prompt=prompt,
+             conversation_style=conversation_style,
+             wss_link=wss_link,
+             options=options,
+         ):
+             if final:
+                 return response
+         await self.chat_hub.wss.close()
+         return None
+ 
+     async def ask_stream(
+         self,
+         prompt: str,
+         wss_link: str,
+         conversation_style: CONVERSATION_STYLE_TYPE = None,
+         raw: bool = False,
+         options: dict = None,
+     ) -> Generator[str, None, None]:
+         """
+         Ask a question to the bot
+         """
+         async for response in self.chat_hub.ask_stream(
+             prompt=prompt,
+             conversation_style=conversation_style,
+             wss_link=wss_link,
+             raw=raw,
+             options=options,
+         ):
+             yield response
+ 
+     async def close(self) -> None:
+         """
+         Close the connection
+         """
+         await self.chat_hub.close()
+ 
+     async def reset(self) -> None:
+         """
+         Reset the conversation
+         """
+         await self.close()
+         self.chat_hub = _ChatHub(_Conversation(self.cookies, self.proxy))
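`edge_gpt.py` frames every websocket message with the ASCII record separator `\x1e` (`DELIMITER`): `_append_identifier` serializes a dict to JSON and appends the separator, and `ask_stream` splits the received buffer on the same byte, skipping empty fragments. The round trip in isolation:

```python
import json

DELIMITER = "\x1e"  # record separator used to frame ChatHub websocket messages

def append_identifier(msg: dict) -> str:
    # serialize the payload and terminate it with the delimiter
    return json.dumps(msg) + DELIMITER

# two framed messages concatenated on the wire, as during the handshake
wire = append_identifier({"protocol": "json", "version": 1}) + append_identifier({"type": 6})

# receiver side: split on the delimiter and drop empty trailing fragments
decoded = [json.loads(o) for o in wire.split(DELIMITER) if o]
```

Because each message ends (rather than begins) with the delimiter, the final `split` always produces one empty trailing fragment, which the `if o` filter discards.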
request_llm/requirements_newbing.txt ADDED
@@ -0,0 +1,8 @@
+ BingImageCreator
+ certifi
+ httpx
+ prompt_toolkit
+ requests
+ rich
+ websockets
+ httpx[socks]
toolbox.py CHANGED
@@ -5,7 +5,20 @@ import inspect
  import re
  from latex2mathml.converter import convert as tex2mathml
  from functools import wraps, lru_cache
- ############################### 插件输入输出接驳区 #######################################
  class ChatBotWithCookies(list):
      def __init__(self, cookie):
          self._cookies = cookie
@@ -20,6 +33,7 @@ class ChatBotWithCookies(list):
      def get_cookies(self):
          return self._cookies

  def ArgsGeneralWrapper(f):
      """
      装饰器函数,用于重组输入参数,改变输入参数的顺序与结构。
@@ -47,6 +61,7 @@ def ArgsGeneralWrapper(f):
          yield from f(txt_passon, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, system_prompt, *args)
      return decorated

  def update_ui(chatbot, history, msg='正常', **kwargs):  # 刷新界面
      """
      刷新用户界面
@@ -54,10 +69,18 @@ def update_ui(chatbot, history, msg='正常', **kwargs): # 刷新界面
      assert isinstance(chatbot, ChatBotWithCookies), "在传递chatbot的过程中不要将其丢弃。必要时,可用clear将其清空,然后用for+append循环重新赋值。"
      yield chatbot.get_cookies(), chatbot, history, msg

  def CatchException(f):
      """
      装饰器函数,捕捉函数f中的异常并封装到一个生成器中返回,并显示到聊天当中。
      """
      @wraps(f)
      def decorated(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT):
          try:
@@ -66,9 +89,10 @@ def CatchException(f):
              from check_proxy import check_proxy
              from toolbox import get_conf
              proxies, = get_conf('proxies')
-             tb_str = '```\n' + traceback.format_exc() + '```'
-             if chatbot is None or len(chatbot) == 0:
-                 chatbot = [["插件调度异常", "异常原因"]]
              chatbot[-1] = (chatbot[-1][0],
                             f"[Local Message] 实验性函数调用出错: \n\n{tb_str} \n\n当前代理可用性: \n\n{check_proxy(proxies)}")
              yield from update_ui(chatbot=chatbot, history=history, msg=f'异常 {e}')  # 刷新界面
@@ -93,7 +117,23 @@ def HotReload(f):
      return decorated


- ####################################### 其他小工具 #####################################

  def get_reduce_token_percent(text):
      """
@@ -113,7 +153,6 @@ def get_reduce_token_percent(text):
      return 0.5, '不详'

-
  def write_results_to_file(history, file_name=None):
      """
      将对话记录history以Markdown格式写入文件中。如果没有指定文件名,则使用当前时间生成文件名。
@@ -178,13 +217,17 @@ def text_divide_paragraph(text):
      text = "</br>".join(lines)
      return text

-
  def markdown_convertion(txt):
      """
      将Markdown格式的文本转换为HTML格式。如果包含数学公式,则先将公式转换为HTML格式。
      """
      pre = '<div class="markdown-body">'
      suf = '</div>'
      markdown_extension_configs = {
          'mdx_math': {
              'enable_dollar_delimiter': True,
@@ -228,8 +271,14 @@ def markdown_convertion(txt):
          content = content.replace('</script>\n</script>', '</script>')
          return content

-     if ('$' in txt) and ('```' not in txt):  # 有$标识的公式符号,且没有代码段```的标识
          # convert everything to html format
          split = markdown.markdown(text='---')
          convert_stage_1 = markdown.markdown(text=txt, extensions=['mdx_math', 'fenced_code', 'tables', 'sane_lists'], extension_configs=markdown_extension_configs)
@@ -369,6 +418,9 @@ def find_recent_files(directory):


  def on_file_uploaded(files, chatbot, txt, txt2, checkboxes):
      if len(files) == 0:
          return chatbot, txt
      import shutil
@@ -388,8 +440,7 @@ def on_file_uploaded(files, chatbot, txt, txt2, checkboxes):
      shutil.copy(file.name, f'private_upload/{time_tag}/{file_origin_name}')
      err_msg += extract_archive(f'private_upload/{time_tag}/{file_origin_name}',
                                 dest_dir=f'private_upload/{time_tag}/{file_origin_name}.extract')
-     moved_files = [fp for fp in glob.glob(
-         'private_upload/**/*', recursive=True)]
      if "底部输入区" in checkboxes:
          txt = ""
          txt2 = f'private_upload/{time_tag}'
@@ -414,8 +465,9 @@ def on_report_generated(files, chatbot):
      return report_files, chatbot

  def is_openai_api_key(key):
-     API_MATCH = re.match(r"sk-[a-zA-Z0-9]{48}$", key)
-     return bool(API_MATCH)

  def is_api2d_key(key):
      if key.startswith('fk') and len(key) == 41:
@@ -508,7 +560,7 @@ def clear_line_break(txt):
  class DummyWith():
      """
      这段代码定义了一个名为DummyWith的空上下文管理器,
-     它的作用是……额……没用,即在代码结构不变得情况下取代其他的上下文管理器。
      上下文管理器是一种Python对象,用于与with语句一起使用,
      以确保一些资源在代码块执行期间得到正确的初始化和清理。
      上下文管理器必须实现两个方法,分别为 __enter__()和 __exit__()。
@@ -522,6 +574,9 @@ class DummyWith():
          return

  def run_gradio_in_subpath(demo, auth, port, custom_path):
      def is_path_legal(path: str)->bool:
          '''
          check path for sub url
 
5
  import re
6
  from latex2mathml.converter import convert as tex2mathml
7
  from functools import wraps, lru_cache
8
+
9
+ """
10
+ ========================================================================
11
+ 第一部分
12
+ 函数插件输入输出接驳区
13
+ - ChatBotWithCookies: 带Cookies的Chatbot类,为实现更多强大的功能做基础
14
+ - ArgsGeneralWrapper: 装饰器函数,用于重组输入参数,改变输入参数的顺序与结构
15
+ - update_ui: 刷新界面用 yield from update_ui(chatbot, history)
16
+ - CatchException: 将插件中出的所有问题显示在界面上
17
+ - HotReload: 实现插件的热更新
18
+ - trimmed_format_exc: 打印traceback,为了安全而隐藏绝对地址
19
+ ========================================================================
20
+ """
21
+
22
  class ChatBotWithCookies(list):
23
  def __init__(self, cookie):
24
  self._cookies = cookie
 
33
  def get_cookies(self):
34
  return self._cookies
35
 
36
+
37
  def ArgsGeneralWrapper(f):
38
  """
39
  装饰器函数,用于重组输入参数,改变输入参数的顺序与结构。
 
61
  yield from f(txt_passon, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, system_prompt, *args)
62
  return decorated
63
 
64
+
65
  def update_ui(chatbot, history, msg='正常', **kwargs): # 刷新界面
66
  """
67
  刷新用户界面
 
69
  assert isinstance(chatbot, ChatBotWithCookies), "在传递chatbot的过程中不要将其丢弃。必要时,可用clear将其清空,然后用for+append循环重新赋值。"
70
  yield chatbot.get_cookies(), chatbot, history, msg
71
 
72
+ def trimmed_format_exc():
73
+ import os, traceback
74
+ str = traceback.format_exc()
75
+ current_path = os.getcwd()
76
+ replace_path = "."
77
+ return str.replace(current_path, replace_path)
78
+
79
  def CatchException(f):
80
  """
81
  装饰器函数,捕捉函数f中的异常并封装到一个生成器中返回,并显示到聊天当中。
82
  """
83
+
84
  @wraps(f)
85
  def decorated(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT):
86
  try:
 
89
  from check_proxy import check_proxy
90
  from toolbox import get_conf
91
  proxies, = get_conf('proxies')
92
+ tb_str = '```\n' + trimmed_format_exc() + '```'
93
+ if len(chatbot) == 0:
94
+ chatbot.clear()
95
+ chatbot.append(["插件调度异常", "异常原因"])
96
  chatbot[-1] = (chatbot[-1][0],
97
  f"[Local Message] 实验性函数调用出错: \n\n{tb_str} \n\n当前代理可用性: \n\n{check_proxy(proxies)}")
98
  yield from update_ui(chatbot=chatbot, history=history, msg=f'异常 {e}') # 刷新界面
 
117
  return decorated
118
 
119
 
120
+ """
121
+ ========================================================================
122
+ 第二部分
123
+ 其他小工具:
124
+ - write_results_to_file: 将结果写入markdown文件中
125
+ - regular_txt_to_markdown: 将普通文本转换为Markdown格式的文本。
126
+ - report_execption: 向chatbot中添加简单的意外错误信息
127
+ - text_divide_paragraph: 将文本按照段落分隔符分割开,生成带有段落标签的HTML代码。
128
+ - markdown_convertion: 用多种方式组合,将markdown转化为好看的html
129
+ - format_io: 接管gradio默认的markdown处理方式
130
+ - on_file_uploaded: 处理文件的上传(自动解压)
131
+ - on_report_generated: 将生成的报告自动投射到文件上传区
132
+ - clip_history: 当历史上下文过长时,自动截断
133
+ - get_conf: 获取设置
134
+ - select_api_key: 根据当前的模型类别,抽取可用的api-key
135
+ ========================================================================
136
+ """
137
 
138
  def get_reduce_token_percent(text):
139
  """
 
153
  return 0.5, '不详'
154
 
155
 
 
156
  def write_results_to_file(history, file_name=None):
157
  """
158
  将对话记录history以Markdown格式写入文件中。如果没有指定文件名,则使用当前时间生成文件名。
 
      text = "</br>".join(lines)
      return text

+ @lru_cache(maxsize=128)  # lru cache speeds up repeated conversions
  def markdown_convertion(txt):
      """
      Convert Markdown text to HTML. If it contains math formulas, convert the formulas to HTML first.
      """
      pre = '<div class="markdown-body">'
      suf = '</div>'
+     if txt.startswith(pre) and txt.endswith(suf):
+         # print('warning: input was already converted; converting twice may cause problems')
+         return txt  # already converted, no need to convert again
+
      markdown_extension_configs = {
          'mdx_math': {
              'enable_dollar_delimiter': True,
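The `startswith`/`endswith` guard plus `lru_cache` added in this hunk can be sketched in isolation. The `convert` body below is a deliberately trivial stand-in for the real Markdown pipeline, not the project's implementation:

```python
from functools import lru_cache

PRE, SUF = '<div class="markdown-body">', '</div>'

@lru_cache(maxsize=128)  # memoize: identical inputs are converted only once
def convert(txt: str) -> str:
    # Idempotence guard: if the text is already wrapped in the output
    # markers, return it untouched so a second pass cannot corrupt it.
    if txt.startswith(PRE) and txt.endswith(SUF):
        return txt
    # Stand-in for the real Markdown rendering step.
    return PRE + txt.replace("&", "&amp;") + SUF

once = convert("**hi**")
twice = convert(once)  # guard fires: unchanged
```

The guard matters because the same string can flow through the formatter more than once (e.g. when the chat history is re-rendered); combined with the cache, repeated renders cost a dictionary lookup.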
 
      content = content.replace('</script>\n</script>', '</script>')
      return content

+     def no_code(txt):
+         if '```' not in txt:
+             return True
+         else:
+             if '```reference' in txt: return True  # newbing
+             else: return False

+     if ('$' in txt) and no_code(txt):  # has $-delimited formulas and no ``` code-fence markers
          # convert everything to html format
          split = markdown.markdown(text='---')
          convert_stage_1 = markdown.markdown(text=txt, extensions=['mdx_math', 'fenced_code', 'tables', 'sane_lists'], extension_configs=markdown_extension_configs)
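The `no_code` heuristic above decides whether `$` should be treated as a math delimiter: math rendering is only enabled when the text has no real code fences (NewBing's ```` ```reference ```` pseudo-fence counts as "no code"). A minimal sketch of the same routing logic, with `has_inline_math` as an illustrative wrapper name:

```python
def no_code(txt: str) -> bool:
    # True when txt contains no fenced code block; the NewBing
    # "```reference" pseudo-fence is treated as not being code.
    if '```' not in txt:
        return True
    return '```reference' in txt

def has_inline_math(txt: str) -> bool:
    # Route to the math-aware converter only when a $ sign appears
    # and there is no code fence that could contain a literal $.
    return ('$' in txt) and no_code(txt)
```

This is a coarse heuristic: a `$PATH` inside a fenced shell snippet would otherwise be mangled by the math extension, which is exactly the case the guard avoids.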
 


  def on_file_uploaded(files, chatbot, txt, txt2, checkboxes):
+     """
+     Callback invoked when files are uploaded
+     """
      if len(files) == 0:
          return chatbot, txt
      import shutil
 
          shutil.copy(file.name, f'private_upload/{time_tag}/{file_origin_name}')
          err_msg += extract_archive(f'private_upload/{time_tag}/{file_origin_name}',
                                     dest_dir=f'private_upload/{time_tag}/{file_origin_name}.extract')
+     moved_files = [fp for fp in glob.glob('private_upload/**/*', recursive=True)]
      if "底部输入区" in checkboxes:
          txt = ""
          txt2 = f'private_upload/{time_tag}'
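The upload flow above copies each file into a timestamped private directory and then enumerates everything that landed there with a recursive glob. A self-contained sketch (using a temp directory instead of the project's `private_upload/` root, and a single hand-made file instead of real uploads):

```python
import glob
import os
import shutil
import tempfile
import time

# Timestamped destination directory, mirroring the time_tag convention.
base = tempfile.mkdtemp()
time_tag = time.strftime("%Y-%m-%d-%H-%M-%S")
dest = os.path.join(base, time_tag)
os.makedirs(dest, exist_ok=True)

# Simulate one uploaded file being copied into the destination.
src = os.path.join(base, "upload.txt")
with open(src, "w") as f:
    f.write("hello")
shutil.copy(src, os.path.join(dest, "upload.txt"))

# Recursive enumeration: "**" matches zero or more directory levels,
# so direct children of dest are included too.
moved_files = [fp for fp in glob.glob(os.path.join(dest, "**", "*"), recursive=True)]
```

Note that `glob` only expands `**` recursively when `recursive=True` is passed; without it, `**` behaves like a single `*`.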
 
      return report_files, chatbot

  def is_openai_api_key(key):
+     API_MATCH_ORIGINAL = re.match(r"sk-[a-zA-Z0-9]{48}$", key)
+     API_MATCH_AZURE = re.match(r"[a-zA-Z0-9]{32}$", key)
+     return bool(API_MATCH_ORIGINAL) or bool(API_MATCH_AZURE)

  def is_api2d_key(key):
      if key.startswith('fk') and len(key) == 41:
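The patterns added to `is_openai_api_key` accept two shapes: classic OpenAI keys (`sk-` plus 48 alphanumerics) and Azure-style keys (32 alphanumerics). Key formats change over time, so these patterns are best read as a heuristic from this commit, not a specification. A runnable sketch:

```python
import re

def is_openai_api_key(key: str) -> bool:
    # "sk-" followed by exactly 48 alphanumeric characters (classic OpenAI),
    # or exactly 32 alphanumeric characters (Azure OpenAI resource key).
    # re.match anchors at the start; the trailing $ anchors the end.
    return bool(re.match(r"sk-[a-zA-Z0-9]{48}$", key)) or \
           bool(re.match(r"[a-zA-Z0-9]{32}$", key))

ok = is_openai_api_key("sk-" + "a" * 48)
azure_ok = is_openai_api_key("b" * 32)
bad = is_openai_api_key("sk-short")
```

Wrapping the match objects in `bool()` keeps the function's return type honest; `re.match` itself returns a `Match` or `None`.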
 
  class DummyWith():
      """
      This defines DummyWith, an empty context manager:
+     it deliberately does nothing, so it can stand in for another context manager without changing the surrounding code structure.
      A context manager is a Python object meant to be used with the with statement,
      ensuring that resources are correctly initialized and cleaned up while a code block executes.
      A context manager must implement two methods: __enter__() and __exit__().
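A no-op context manager like `DummyWith` is most useful when a `with` block is conditionally needed. The sketch below shows both the two required methods and a typical conditional use (swapping it for a real lock); the `use_lock` flag is illustrative:

```python
import threading

class DummyWith:
    # A no-op context manager: stands in where a real one (a lock,
    # a profiler, ...) is optional, keeping the `with` structure unchanged.
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, tb):
        return  # returning None lets exceptions propagate normally

use_lock = False
cm = threading.Lock() if use_lock else DummyWith()
with cm:
    result = 1 + 1
```

The standard library offers the same thing as `contextlib.nullcontext` (Python 3.7+); defining it locally avoids the version dependency.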
 
          return

  def run_gradio_in_subpath(demo, auth, port, custom_path):
+     """
+     Serve the gradio app under the specified sub-path
+     """
      def is_path_legal(path: str) -> bool:
          '''
          check path for sub url
version CHANGED
@@ -1,5 +1,5 @@
  {
-     "version": 3.2,
+     "version": 3.32,
      "show_feature": true,
-     "new_feature": "保存对话功能 <-> 解读任意语言代码+同时询问任意的LLM组合 <-> 添加联网(Google)回答问题插件 <-> 修复ChatGLM上下文BUG <-> 添加支持清华ChatGLM和GPT-4 <-> 改进架构,支持与多个LLM模型同时对话 <-> 添加支持API2D(国内,可支持gpt4)"
+     "new_feature": "完善对话历史的保存/载入/删除 <-> 我们发现了自动更新模块的BUG,此次更新可能需要您手动到Github下载新版程序并覆盖 <-> ChatGLM加线程锁提高并发稳定性 <-> 支持NewBing <-> Markdown翻译功能支持直接输入Readme文件网址 <-> 保存对话功能 <-> 解读任意语言代码+同时询问任意的LLM组合 <-> 添加联网(Google)回答问题插件 <-> 修复ChatGLM上下文BUG <-> 添加支持清华ChatGLM"
  }