mradermacher committed · commit 3978fc8 · verified · 1 parent: 4b9737a

auto-patch README.md

Files changed (1): README.md (+84, -0)
README.md CHANGED
---
base_model: Delta-Vector/Austral-32B-GLM4-Winton
datasets:
- Delta-Vector/Tauri-Rep-Remover-KTO
- Delta-Vector/Orion-LN-V1-ShareGPT
- Delta-Vector/Orion-Personamaxx-RP
- Delta-Vector/Orion-Co-Writer-51K
- Delta-Vector/Orion-Praxis-Co-Writer
- Delta-Vector/Orion-Shoujo-AI-Filtered-ShareGPT
- Delta-Vector/Orion-PIPPA-Cleaned-V2
- Delta-Vector/Orion-Alpindale-LN-ShareGPT
- Delta-Vector/Orion-Deepseek-V3-RP-Filtered
- Delta-Vector/Orion-Books-V2-ShareGPT
- Delta-Vector/Orion-Light-Novels-Roleplay-Logs-Books-Oh-My-duplicate-turns-removed
- Delta-Vector/Orion-RP-Guild
- Delta-Vector/Orion-Creative_Writing-Complexity
- Delta-Vector/Orion-Deepseek-R1-RP-Filtered
- Delta-Vector/Orion-Storium-Prefixed-Clean
- Delta-Vector/Orion-Misc-Sharegpt-Prefixed
- Delta-Vector/Orion-LIMARP-Complexity
- Delta-Vector/Orion-BlueSky-10K-Complexity
- Delta-Vector/Orion-OpenCAI-ShareGPT
- Delta-Vector/Orion-Roleplay-Logs-Sharegpt-Ngram-cleaned
- Delta-Vector/Orion-vanilla-backrooms-claude-sharegpt
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
- roleplay
- finetune
- axolotl
- adventure
- creative-writing
- GLM4
- 32B
---

## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Delta-Vector/Austral-32B-GLM4-Winton

<!-- provided-files -->

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Austral-32B-GLM4-Winton-i1-GGUF).***

static quants are available at https://huggingface.co/mradermacher/Austral-32B-GLM4-Winton-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
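
As a concrete starting point, here is a minimal sketch, assuming `llama-cpp-python` is installed and the `i1-Q4_K_S` file from the table below is in the working directory. The `.partXofY` naming is an assumption based on the concatenation note above; quants of this size typically ship as single files, in which case the join step is skipped.

```python
# Sketch: join raw split parts (if any), then load the model with
# llama-cpp-python. Filenames here are illustrative assumptions.
import glob
import shutil

from llama_cpp import Llama  # pip install llama-cpp-python

MODEL = "Austral-32B-GLM4-Winton.i1-Q4_K_S.gguf"

# Multi-part files in this style are plain byte splits, so
# concatenating the parts in order restores the original file.
parts = sorted(glob.glob(MODEL + ".part*"))
if parts:
    with open(MODEL, "wb") as out:
        for part in parts:
            with open(part, "rb") as src:
                shutil.copyfileobj(src, out)

# n_gpu_layers=-1 offloads all layers to the GPU (needs a GPU build);
# set it to 0 for CPU-only inference.
llm = Llama(model_path=MODEL, n_ctx=4096, n_gpu_layers=-1)
print(llm("Write a one-sentence story:", max_tokens=64)["choices"][0]["text"])
```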

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants; a download sketch follows the table)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Austral-32B-GLM4-Winton-i1-GGUF/resolve/main/Austral-32B-GLM4-Winton.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Austral-32B-GLM4-Winton-i1-GGUF/resolve/main/Austral-32B-GLM4-Winton.i1-IQ3_M.gguf) | i1-IQ3_M | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/Austral-32B-GLM4-Winton-i1-GGUF/resolve/main/Austral-32B-GLM4-Winton.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.8 | optimal size/speed/quality |

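To script a download rather than click the links above, here is a minimal sketch assuming `huggingface_hub` is installed; the filename matches the Q4_K_S row in the table.

```python
# Sketch: fetch one quant from this repo via the Hugging Face Hub cache.
from huggingface_hub import hf_hub_download  # pip install huggingface_hub

path = hf_hub_download(
    repo_id="mradermacher/Austral-32B-GLM4-Winton-i1-GGUF",
    filename="Austral-32B-GLM4-Winton.i1-Q4_K_S.gguf",
)
print(path)  # local path to the ~18.8 GB file
```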

Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->