---
library_name: transformers
pipeline_tag: text-generation
tags:
- 32b
- 4-bit
- Q4_K_M
- cyberagent
- deepseek
- distill
- gguf
- japanese
- llama-cpp
- qwen
- text-generation
---

# roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf-Q4_K_M-GGUF

**Repo:** `roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf-Q4_K_M-GGUF`
**Original Model:** `cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf`
**Quantized File:** `cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-Q4_K_M.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q4_K_M`

## Overview
This is a GGUF Q4_K_M quantized version of `cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf`.
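
## Usage with llama.cpp
One way to try the quantized file is with llama.cpp's `llama-cli`. The sketch below assumes a recent llama.cpp build with Hugging Face download support (the `--hf-repo`/`--hf-file` flags); the prompt is just an illustrative example, and at Q4_K_M a 32B model is a file of roughly 20 GB, so expect a long first download.

```shell
# Install llama.cpp (Homebrew shown; building from source also works)
brew install llama.cpp

# Download and run the quantized model directly from the Hugging Face repo
llama-cli \
  --hf-repo roleplaiapp/cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-gguf-Q4_K_M-GGUF \
  --hf-file cyberagent-DeepSeek-R1-Distill-Qwen-32B-Japanese-Q4_K_M.gguf \
  -p "こんにちは、自己紹介してください。"
```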

## Quantization By
I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models.
I hope the community finds these quantizations useful.

Andrew Webby @ [RolePlai](https://roleplai.app/).