Update README.md
binary-husky committed

README.md CHANGED
@@ -103,24 +103,32 @@ cd chatgpt_academic
 python -m pip install -r requirements.txt
 
 # (Option II: if you are not familiar with python) use anaconda; the steps are similar (https://www.bilibili.com/video/BV1rc411W7Dr):
-conda create -n gptac_venv python=3.11
-conda activate gptac_venv
-python -m pip install -r requirements.txt
+conda create -n gptac_venv python=3.11    # create the anaconda environment
+conda activate gptac_venv                 # activate the anaconda environment
+python -m pip install -r requirements.txt # same as the pip installation step
 ```
 
-
+<details><summary>Click to expand if you need Tsinghua ChatGLM / Fudan MOSS as a backend</summary>
+<p>
+
+[Optional] To use Tsinghua ChatGLM / Fudan MOSS as a backend, additional dependencies must be installed (prerequisites: familiar with Python + have used PyTorch + a reasonably powerful machine):
 ```sh
-# 
+# [Optional step I] Support Tsinghua ChatGLM. Note: if you hit the "Call ChatGLM fail 不能正常加载ChatGLM的参数" error, try the following: 1. the default install above is the torch+cpu build; to use CUDA, uninstall torch and reinstall torch+cuda; 2. if the model cannot be loaded because the machine is not powerful enough, lower the model precision in request_llm/bridge_chatglm.py by changing every AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
 python -m pip install -r request_llm/requirements_chatglm.txt
 
-# 
+# [Optional step II] Support Fudan MOSS
 python -m pip install -r request_llm/requirements_moss.txt
 git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss  # note: this command must be run from the project root
 
-# 
+# [Optional step III] Make sure AVAIL_LLM_MODELS in the config.py configuration file includes the desired models; all currently supported models are listed below (the jittorllms family is currently only supported via the docker setup):
 AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
 ```
 
+</p>
+</details>
+
+
+
 4. Run
 ```sh
 python main.py
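
The note in optional step I points at a manual edit to request_llm/bridge_chatglm.py when the machine cannot hold the full-precision model. A minimal sketch of what that swap might look like, assuming the file loads ChatGLM through the transformers AutoTokenizer/AutoModel API; the variable names and surrounding structure here are illustrative, not the file's actual contents:

```python
# Illustrative sketch only -- the real request_llm/bridge_chatglm.py may differ.
from transformers import AutoTokenizer, AutoModel

# Full-precision checkpoint (default); may fail to load on low-memory machines:
# tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
# model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)

# int4-quantized checkpoint, as the note suggests, to reduce memory use:
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
model = model.float()  # CPU (torch+cpu build); with torch+cuda, use model.half().cuda() instead
model = model.eval()
```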