Commit 1c6cd68 · Update README.md
Parent(s): 86c1644

README.md CHANGED
@@ -68,7 +68,7 @@ Being a Yi model, try disabling the BOS token and/or running a lower temperature
 
 24GB GPUs can run Yi-34B-200K models at **45K-75K context** with exllamav2. I go into more detail in this [post](https://old.reddit.com/r/LocalLLaMA/comments/1896igc/how_i_run_34b_models_at_75k_context_on_24gb_fast/)
 
-I recommend exl2 quantizations profiled on data similar to the desired task. It is especially sensitive to the quantization data at low bpw! I published my own quantizations on vicuuna chat + fiction writing here: [4bpw
+I recommend exl2 quantizations profiled on data similar to the desired task. It is especially sensitive to the quantization data at low bpw! I published my own quantizations on vicuuna chat + fiction writing here: [4bpw](https://huggingface.co/brucethemoose/CaPlatTessDolXaBoros-34B-200K-exl2-4bpw-fiction) [3.1bpw](https://huggingface.co/brucethemoose/CaPlatTessDolXaBoros-34B-200K-exl2-4bpw-fiction)
 
 To load this in full-context backends like transformers and vllm, you *must* change `max_position_embeddings` in config.json to a lower value than 200,000, otherwise you will OOM!
 
 ***
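For the `max_position_embeddings` note in the diff above, here is a minimal sketch of that config.json edit, assuming the model has already been downloaded to a local directory. The path and the 32768 example value are placeholders, not part of the original README; pick a value that fits your VRAM.

```python
import json

# Hypothetical local checkout of the model; adjust to wherever you downloaded it.
config_path = "Yi-34B-200K/config.json"

with open(config_path) as f:
    config = json.load(f)

# Lower the advertised context window so full-context backends like
# transformers and vllm don't try to reserve KV cache for all 200,000 tokens.
# 32768 is just an example value.
config["max_position_embeddings"] = 32768

with open(config_path, "w") as f:
    json.dump(config, f, indent=2)
```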