roleplaiapp committed on
Commit 2a130f4 · verified · 1 Parent(s): 67929c3

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +43 -0
README.md ADDED
@@ -0,0 +1,43 @@
+ ---
+ pipeline_tag: text-generation
+ library_name: transformers
+ base_model: internlm/internlm3-8b-instruct
+ tags:
+ - llama-cpp
+ - internlm3-8b-instruct
+ - gguf
+ - Q3_K_M
+ - 8b
+ - 3-bit
+ - internlm3
+ - internlm
+ - code
+ - math
+ - chat
+ - roleplay
+ - text-generation
+ - safetensors
+ - nlp
+ ---
+
+ # roleplaiapp/internlm3-8b-instruct-Q3_K_M-GGUF
+
+ **Repo:** `roleplaiapp/internlm3-8b-instruct-Q3_K_M-GGUF`
+ **Original Model:** `internlm3-8b-instruct`
+ **Organization:** `internlm`
+ **Quantized File:** `internlm3-8b-instruct-q3_k_m.gguf`
+ **Quantization:** `GGUF`
+ **Quantization Method:** `Q3_K_M`
+ **Use Imatrix:** `False`
+ **Split Model:** `False`
+
+ ## Overview
+ This is a GGUF Q3_K_M quantized version of [internlm3-8b-instruct](https://huggingface.co/internlm/internlm3-8b-instruct).
+
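+ ## Usage
+ A minimal sketch of one way to run this quantized file with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the context size, prompt, and token limit below are illustrative choices, not settings tied to this quantization. Downloading from the Hub this way also requires `huggingface_hub` to be installed.
+
+ ```python
+ from llama_cpp import Llama
+
+ # Download internlm3-8b-instruct-q3_k_m.gguf from this repo and load it.
+ llm = Llama.from_pretrained(
+     repo_id="roleplaiapp/internlm3-8b-instruct-Q3_K_M-GGUF",
+     filename="internlm3-8b-instruct-q3_k_m.gguf",
+     n_ctx=4096,  # assumed context window; adjust to your hardware
+ )
+
+ # Simple chat-style generation against the instruct model.
+ response = llm.create_chat_completion(
+     messages=[{"role": "user", "content": "Summarize what Q3_K_M quantization trades off."}],
+     max_tokens=256,
+ )
+ print(response["choices"][0]["message"]["content"])
+ ```
+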
+ ## Quantization By
+ I often have idle A100 GPUs while building, testing, and training the RP app, so I put them to use quantizing models.
+ I hope the community finds these quantizations useful.
+
+ Andrew Webby @ [RolePlai](https://roleplai.app/)