oncu committed · Commit e328d4b · verified · Parent: 6c7de9d

Upload README.md with huggingface_hub
---
inference: false
library_name: transformers
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
license: cc-by-nc-4.0
extra_gated_prompt: "By submitting this form, you agree to the [License Agreement](https://cohere.com/c4ai-cc-by-nc-license)\
  \ and acknowledge that the information you provide will be collected, used, and\
  \ shared in accordance with Cohere's [Privacy Policy](https://cohere.com/privacy).\
  \ You'll receive email updates about C4AI and Cohere research, events, products\
  \ and services. You can unsubscribe at any time."
extra_gated_fields:
  Name: text
  Affiliation: text
  Country: country
  I agree to use this model for non-commercial use ONLY: checkbox
tags:
- abliterated
- uncensored
base_model:
- CohereForAI/aya-expanse-8b
---

# aya-expanse-8b-abliterated GGUF Quantized Models

## Technical Details
- **Quantization Tool:** llama.cpp
- **Version:** 5340 (15e6125a)

## Model Information
- **Base Model:** [huihui-ai/aya-expanse-8b-abliterated](https://huggingface.co/huihui-ai/aya-expanse-8b-abliterated)
- **Quantized by:** [oncu](https://huggingface.co/oncu)

## Available Files
| 🚀 Download | 🔢 Type | 📝 Description |
|------------|---------|---------------|
| [Download](https://huggingface.co/oncu/aya-expanse-8b-abliterated-GGUF/resolve/main/aya-expanse-8b-abliterated.q4_k_m.gguf) | Q4_K_M | 4-bit balanced (recommended default) |

💡 **Q4_K_M** provides the best balance of size and quality for most use cases.
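
The download link in the table follows the Hub's standard `resolve/main` URL pattern. As a minimal sketch (the `gguf_download_url` helper below is illustrative, not part of `huggingface_hub`), a direct link for any quant file in this repo can be assembled like so:

```python
def gguf_download_url(repo_id: str, filename: str) -> str:
    """Build a direct-download URL for a file in a Hugging Face repo,
    following the https://huggingface.co/<repo_id>/resolve/main/<filename>
    pattern used by the table above."""
    return f"https://huggingface.co/{repo_id}/resolve/main/{filename}"

# Example: the Q4_K_M file listed in the table
url = gguf_download_url(
    "oncu/aya-expanse-8b-abliterated-GGUF",
    "aya-expanse-8b-abliterated.q4_k_m.gguf",
)
print(url)
```

In practice you would hand this URL to `curl`/`wget`, or skip URL construction entirely and fetch the file with `huggingface_hub.hf_hub_download`, which also handles caching.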