Update README.md
README.md
CHANGED
@@ -42,12 +42,12 @@ During testing, Designant punched well above its weight class in terms of parame
 
 
 EXL3:
-- TODO!
-
-MLX:
-- TODO!
+- [Official EXL3 quant repo](https://huggingface.co/allura-quants/allura-org_Q3-8B-Kintsugi-EXL3)
 
 GGUF:
+- [Official static GGUF quants](https://huggingface.co/allura-quants/allura-org_Q3-8B-Kintsugi-GGUF)
+
+MLX:
 - TODO!
 
 # Usage
@@ -80,6 +80,8 @@ Both stages here are very similar to [Q3-30B-A3B-Designant](https://huggingface.
 
 - Axolotl, Unsloth, Huggingface - Making the frameworks used to train this model (Axolotl was used for the SFT process, and Unsloth+TRL was used for the KTO process)
 
+- All quanters, inside and outside the org, specifically Artus and Lyra
+
 We would like to thank the Allura community on Discord, especially Curse, Heni, Artus and Mawnipulator, for their companionship and moral support. You all mean the world to us <3
 
 ---
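For anyone picking up the newly linked quants, the sketch below (not part of the README diff itself) shows one way to pull the official GGUF repo locally using `huggingface_hub`. The repo id comes from the link added above; the local directory name is an arbitrary choice.

```python
# Minimal sketch: download the official GGUF quants linked in the diff above.
# Assumes `pip install huggingface_hub`; the local_dir name is arbitrary.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="allura-quants/allura-org_Q3-8B-Kintsugi-GGUF",  # repo from the README diff
    local_dir="Q3-8B-Kintsugi-GGUF",                         # any local folder works
)
print(f"GGUF files downloaded to: {local_path}")
```

From there, the downloaded .gguf file can be loaded by any llama.cpp-compatible runtime; the README's own Usage section remains the authoritative reference for running the model.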