This model was converted to GGUF format from [`sometimesanotion/Lamarck-14B-v0.7`](https://huggingface.co/sometimesanotion/Lamarck-14B-v0.7) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/sometimesanotion/Lamarck-14B-v0.7) for more details on the model.
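
For reference, the conversion that the space performs can be approximated locally with llama.cpp's own tooling. A minimal sketch; the local paths, output names, and Q4_K_M quantization type are illustrative assumptions, not this repo's exact settings:

```bash
# Illustrative local equivalent of the GGUF-my-repo conversion;
# output names and the Q4_K_M type are assumptions.
git clone https://github.com/ggml-org/llama.cpp
pip install -r llama.cpp/requirements.txt

# Download the original safetensors model.
huggingface-cli download sometimesanotion/Lamarck-14B-v0.7 --local-dir Lamarck-14B-v0.7

# Convert to an f16 GGUF, then quantize (llama-quantize comes with
# a llama.cpp build or the brew package).
python llama.cpp/convert_hf_to_gguf.py Lamarck-14B-v0.7 \
  --outfile lamarck-14b-v0.7-f16.gguf --outtype f16
llama-quantize lamarck-14b-v0.7-f16.gguf lamarck-14b-v0.7-q4_k_m.gguf Q4_K_M
```
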
---

Lamarck 14B v0.7 is a generalist merge with an emphasis on multi-step reasoning, prose, and multi-language ability. The 14B parameter model class has many strong performers, and Lamarck strives to be well-rounded and solid.

Lamarck is produced by a custom toolchain that automates a process of LoRA extraction and layer-targeting merges (a configuration sketch follows the list):

- Extracted LoRA adapters from special-purpose merges
- Custom base models and model_stocks, with LoRAs from huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2 to minimize the IFEVAL loss often seen in model_stock merges
- Separate branches for aggressive breadcrumbs and conservative DELLA merges
- Highly targeted weight/density gradients for every 2-4 layers, at each stage
- Finalization through SLERP+TIES merges recombining the breadcrumbs and DELLA branches to taste
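
The exact merge configurations are not published in this card, but the steps above map onto mergekit. Below is a hypothetical sketch of one conservative DELLA branch: the merge method and model names come from this card, while every weight, density, and path is invented for illustration.

```bash
# Hypothetical mergekit config for a DELLA branch; all numeric
# values are invented. List-valued parameters form per-layer
# gradients, approximating the targeted 2-4-layer gradients above.
pip install mergekit

cat > della-branch.yaml <<'EOF'
merge_method: della
base_model: arcee-ai/Virtuoso-Small
models:
  - model: sthenno-com/miscii-14b-1225
    parameters:
      weight: [0.6, 0.5, 0.4, 0.3]   # tapers off toward later layers
      density: [0.5, 0.4, 0.3]
  - model: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
    parameters:
      weight: [0.2, 0.3, 0.4, 0.5]   # heavier influence in later layers
      density: [0.3, 0.4, 0.5]
parameters:
  epsilon: 0.05
  lambda: 1.0
dtype: bfloat16
EOF

mergekit-yaml della-branch.yaml ./della-branch-out
```

A breadcrumbs branch and the final SLERP+TIES recombination would follow the same pattern with mergekit's `breadcrumbs`, `slerp`, and `ties` methods.
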

Lamarck's performance comes from an ancestry that goes back through careful merges to select finetuning work, upcycled and combined. Through intermediate merges, arcee-ai/Virtuoso-Small, sthenno-com/miscii-14b-1225, and VAGOsolutions/SauerkrautLM-v2-14b-DPO are emphasized in early layers for extra BBH; later layers add synergistic influence from deepseek-ai/DeepSeek-R1-Distill-Qwen-14B, Krystalan/DRT-o1-14B, EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2, and CultriX/Qwen2.5-14B-Wernicke.

More subjectively, its prose and translation abilities are boosted by repeated re-emphasis of Krystalan/DRT-o1-14B and underwoods/medius-erebus-magnum-14b. Other models found in sometimesanotion/Qwenvergence-14B-v3-Prose also shape its prose quality, with a surprising synergy in reasoning.

Kudos to @arcee-ai, @deepseek-ai, @Krystalan, @underwoods, @VAGOsolutions, @CultriX, @sthenno-com, and @rombodawg, whose models had the most influence. Vimarckoso v3 has the model card that documents its extended lineage.

---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
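
```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI. The quantized repo id and GGUF filename below are placeholders, since the exact artifacts are not listed in this section; substitute the real ones.

### CLI:
```bash
llama-cli --hf-repo <repo-owner>/Lamarck-14B-v0.7-GGUF --hf-file <quant-filename>.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo <repo-owner>/Lamarck-14B-v0.7-GGUF --hf-file <quant-filename>.gguf -c 2048
```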