zRzRzRzRzRzRzR committed
Commit 0aa722c
1 Parent(s): d5275af

support transformers 4.44

Files changed (5)
  1. README.md +2 -0
  2. README_en.md +2 -0
  3. config.json +1 -1
  4. generation_config.json +1 -1
  5. modeling_chatglm.py +1 -4
README.md CHANGED
@@ -17,6 +17,8 @@ inference: false
 
 Read this in [English](README_en.md).
 
+**2024/08/12: the code in this repository has been updated to use `transformers>=4.44.0`; please update your dependencies promptly.**
+
 **2024/07/24: we released our latest technical deep dive on long text. See [here](https://medium.com/@ChatGLM/glm-long-scaling-pre-trained-model-contexts-to-millions-caa3c48dea85) for our technical report on the long-context techniques used in training the open-source GLM-4-9B model.**
 
 ## Model Introduction
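
The dependency bump above is the substantive change announced in both READMEs. As a hedged illustration (not part of the commit), a script can fail fast if the installed library is older than the floor the note names; the `>=4.44.0` bound comes from the README line above:

```python
# Minimal sketch: abort early if the installed transformers is older than
# the 4.44.0 floor announced in the README above.
from packaging.version import Version  # packaging is a standard pip-ecosystem dependency
import transformers

if Version(transformers.__version__) < Version("4.44.0"):
    raise RuntimeError(
        f"transformers {transformers.__version__} is too old for this repo; "
        "upgrade with: pip install -U 'transformers>=4.44.0'"
    )
```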
README_en.md CHANGED
@@ -1,5 +1,7 @@
 # GLM-4-9B-Chat-1M
 
+**2024/08/12, The repository code has been updated and now requires `transformers>=4.44.0`. Please update your dependencies accordingly.**
+
 **On July 24, 2024, we released the latest technical interpretation related to long texts. Check
 out [here](https://medium.com/@ChatGLM/glm-long-scaling-pre-trained-model-contexts-to-millions-caa3c48dea85) to view our
 technical report on long context technology in the training of the open-source GLM-4-9B model.**
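
For context, this is a typical way to load the checkpoint after upgrading. This is a sketch, assuming the usual Hub id `THUDM/glm-4-9b-chat-1m` and the standard `trust_remote_code` pattern for this model family; adjust the id or path to your setup:

```python
# Sketch: load the chat model once the updated dependency is installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "THUDM/glm-4-9b-chat-1m"  # assumed Hub id; use a local path if different
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches "torch_dtype": "bfloat16" in config.json
    trust_remote_code=True,      # required so the repo's modeling_chatglm.py is used
    device_map="auto",
)
```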
config.json CHANGED
@@ -38,7 +38,7 @@
   "seq_length": 1048576,
   "use_cache": true,
   "torch_dtype": "bfloat16",
-  "transformers_version": "4.42.4",
+  "transformers_version": "4.44.0",
   "tie_word_embeddings": false,
   "eos_token_id": [151329, 151336, 151338],
   "pad_token_id": 151329
generation_config.json CHANGED
@@ -9,5 +9,5 @@
   "temperature": 0.8,
   "max_length": 1024000,
   "top_p": 0.8,
-  "transformers_version": "4.42.4"
+  "transformers_version": "4.44.0"
 }
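
The sampling defaults above (`temperature=0.8`, `top_p=0.8`, `max_length=1024000`) are what `GenerationConfig.from_pretrained` picks up automatically. A hedged sketch of inspecting and overriding them, again under the assumed repo id:

```python
# Sketch: load the shipped generation defaults and override them per call.
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained("THUDM/glm-4-9b-chat-1m")  # assumed repo id
print(gen_config.temperature, gen_config.top_p, gen_config.max_length)   # 0.8 0.8 1024000
gen_config.temperature = 0.2  # e.g. lower temperature for more deterministic output
```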
modeling_chatglm.py CHANGED
@@ -930,12 +930,9 @@ class ChatGLMForConditionalGeneration(ChatGLMPreTrainedModel):
         outputs: ModelOutput,
         model_kwargs: Dict[str, Any],
         is_encoder_decoder: bool = False,
-        standardize_cache_format: bool = False,
     ) -> Dict[str, Any]:
         # update past_key_values
-        cache_name, cache = self._extract_past_from_model_output(
-            outputs, standardize_cache_format=standardize_cache_format
-        )
+        cache_name, cache = self._extract_past_from_model_output(outputs)
         model_kwargs[cache_name] = cache
 
         # update attention mask
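
This is the code change that motivates the version bump: transformers 4.44 removed the `standardize_cache_format` parameter from `GenerationMixin._extract_past_from_model_output`, so the old keyword call raised a `TypeError` under the new release, and the commit adopts the new one-argument call. For code that must straddle both releases, a version-tolerant variant (a sketch, not what this commit does) could dispatch on the method's signature:

```python
# Hedged sketch: call _extract_past_from_model_output across transformers
# releases. The commit itself simply targets transformers>=4.44.0 instead.
import inspect

def extract_past_compat(model, outputs):
    params = inspect.signature(model._extract_past_from_model_output).parameters
    if "standardize_cache_format" in params:
        # transformers < 4.44: the keyword still exists
        return model._extract_past_from_model_output(outputs, standardize_cache_format=False)
    # transformers >= 4.44: the keyword was removed
    return model._extract_past_from_model_output(outputs)
```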