---
base_model: Delta-Vector/Austral-32B-GLM4-Winton
datasets:
- Delta-Vector/Tauri-Rep-Remover-KTO
- Delta-Vector/Orion-LN-V1-ShareGPT
- Delta-Vector/Orion-Personamaxx-RP
- Delta-Vector/Orion-Co-Writer-51K
- Delta-Vector/Orion-Praxis-Co-Writer
- Delta-Vector/Orion-Shoujo-AI-Filtered-ShareGPT
- Delta-Vector/Orion-PIPPA-Cleaned-V2
- Delta-Vector/Orion-Alpindale-LN-ShareGPT
- Delta-Vector/Orion-Deepseek-V3-RP-Filtered
- Delta-Vector/Orion-Books-V2-ShareGPT
- Delta-Vector/Orion-Light-Novels-Roleplay-Logs-Books-Oh-My-duplicate-turns-removed
- Delta-Vector/Orion-RP-Guild
- Delta-Vector/Orion-Creative_Writing-Complexity
- Delta-Vector/Orion-Deepseek-R1-RP-Filtered
- Delta-Vector/Orion-Storium-Prefixed-Clean
- Delta-Vector/Orion-Misc-Sharegpt-Prefixed
- Delta-Vector/Orion-LIMARP-Complexity
- Delta-Vector/Orion-BlueSky-10K-Complexity
- Delta-Vector/Orion-OpenCAI-ShareGPT
- Delta-Vector/Orion-Roleplay-Logs-Sharegpt-Ngram-cleaned
- Delta-Vector/Orion-vanilla-backrooms-claude-sharegpt
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
- roleplay
- finetune
- axolotl
- adventure
- creative-writing
- GLM4
- 32B
---
|
## About |
|
|
|
<!-- ### quantize_version: 2 --> |
|
<!-- ### output_tensor_quantised: 1 --> |
|
<!-- ### convert_type: hf --> |
|
<!-- ### vocab_type: --> |
|
<!-- ### tags: nicoboss --> |
|
weighted/imatrix quants of https://huggingface.co/Delta-Vector/Austral-32B-GLM4-Winton |
|
|
|
<!-- provided-files --> |
|
|
|
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Austral-32B-GLM4-Winton-i1-GGUF).*** |
|
|
|
static quants are available at https://huggingface.co/mradermacher/Austral-32B-GLM4-Winton-GGUF |
|
## Usage |
|
|
|
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
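
As a minimal sketch of one way to do this (assuming the Python packages `huggingface_hub` and `llama-cpp-python`, neither of which this card prescribes), you could download and load a single-part quant from this repository like so:

```python
# Hypothetical example, not part of this card: fetch one quant and run it with
# llama-cpp-python; any llama.cpp-based runtime can load the same file.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# File name taken from the "Provided Quants" table below.
path = hf_hub_download(
    repo_id="mradermacher/Austral-32B-GLM4-Winton-i1-GGUF",
    filename="Austral-32B-GLM4-Winton.i1-Q4_K_S.gguf",
)

# Note: multi-part quants must be concatenated into a single .gguf before
# loading; see TheBloke's README linked above for details.
llm = Llama(model_path=path, n_ctx=4096)  # context size chosen arbitrarily here
out = llm("Write one sentence about the sea.", max_tokens=64)
print(out["choices"][0]["text"])
```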
|
|
|
## Provided Quants |
|
|
|
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) |
|
|
|
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Austral-32B-GLM4-Winton-i1-GGUF/resolve/main/Austral-32B-GLM4-Winton.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Austral-32B-GLM4-Winton-i1-GGUF/resolve/main/Austral-32B-GLM4-Winton.i1-IQ3_M.gguf) | i1-IQ3_M | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/Austral-32B-GLM4-Winton-i1-GGUF/resolve/main/Austral-32B-GLM4-Winton.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.8 | optimal size/speed/quality |
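
If you want to check the repository's full file list programmatically rather than through the model page linked above, a small sketch with `huggingface_hub` (an assumption, not something this card requires) could look like:

```python
# Hypothetical helper, not part of this card: list every GGUF file in the repo.
from huggingface_hub import HfApi

files = HfApi().list_repo_files("mradermacher/Austral-32B-GLM4-Winton-i1-GGUF")
for name in sorted(f for f in files if f.endswith(".gguf")):
    print(name)  # e.g. Austral-32B-GLM4-Winton.i1-Q2_K.gguf
```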
|
|
|
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
|
|
|
 |
|
|
|
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
|
|
|
## FAQ / Model Request |
|
|
|
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
|
|
|
## Thanks |
|
|
|
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
|
|
|
<!-- end --> |
|
|