btaskel committed
Commit b67390b · verified
1 Parent(s): 36d9152

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -12,7 +12,7 @@ https://huggingface.co/Seikaijyu/RWKV6-3B-v2.1-Aphrodite-yandere-chat
 
 Based on my experience, Q4_K_S and Q4_K_M are usually the balance points between model size, quantization, and speed.
 
-In some benchmarks, selecting a large-parameter low-quantization LLM tends to perform better than a small-parameter high-quantization LLM.
+In some benchmarks, selecting a large-parameter high-quantization LLM tends to perform better than a small-parameter low-quantization LLM.
 
 根据我的经验,通常Q4_K_S、Q4_K_M是模型尺寸/量化/速度的平衡点
 
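For context on the quantization levels discussed in the changed line, below is a minimal sketch of how a Q4_K_M GGUF file of this kind might be loaded and run with llama-cpp-python. The file name, context size, and sampling parameters are assumptions for illustration, not taken from this repository, and running an RWKV6 GGUF requires a llama.cpp build recent enough to support that architecture.

```python
# Minimal sketch, assuming llama-cpp-python is installed and the GGUF file
# has been downloaded locally; the file name below is hypothetical.
from llama_cpp import Llama

# Load the quantized model; Q4_K_M trades a small quality loss for a much
# smaller file and faster CPU inference than an F16 export.
llm = Llama(
    model_path="RWKV6-3B-v2.1-Aphrodite-yandere-chat-Q4_K_M.gguf",  # hypothetical path
    n_ctx=2048,    # context window, adjust to taste
    n_threads=8,   # CPU threads to use
)

# Run a short completion to sanity-check the quantized model.
out = llm("User: Hello!\nAssistant:", max_tokens=64, temperature=0.8)
print(out["choices"][0]["text"])
```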