sometimesanotion

AI & ML interests

Agentic LLM services, model merging, finetunes, distillation

Recent Activity

published a model 5 minutes ago: sometimesanotion/Lamarck-14B-v0.6
liked a dataset about 6 hours ago: augmxnt/deccp
updated a model about 19 hours ago: sometimesanotion/Lamarck-14B-v0.7

Organizations

Hugging Face Discord Community

Posts (1)

I've reached a #1 average score of 41.22% for 14B-parameter models on the Open LLM Leaderboard. As of this writing, sometimesanotion/Lamarck-14B-v0.7 ranks #8 among all models up to 70B parameters.

It took a custom toolchain built around Arcee AI's mergekit to manage the complex merges, layer-wise gradients, and LoRAs required to make this happen. I really like seeing the strengths of many quality finetunes combined in one solid generalist model.
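For context, mergekit merges are normally described as a YAML recipe and run through the stock mergekit-yaml CLI. The sketch below is a minimal, hypothetical example of that workflow (placeholder model names, layer ranges, and gradient values; not the actual Lamarck recipe) showing how a per-layer gradient on the SLERP interpolation factor can be expressed and launched from Python.

```python
# Minimal sketch, assuming `pip install mergekit` and two placeholder 14B finetunes.
# It writes a SLERP recipe with a layer-wise gradient on `t`, then runs the standard
# `mergekit-yaml <config> <output_dir>` entry point.
import subprocess
from pathlib import Path

CONFIG = """\
# SLERP merge of two hypothetical finetunes; t = 0.0 keeps base_model weights,
# t = 1.0 takes the other model's weights, interpolated across layers.
slices:
  - sources:
      - model: org/finetune-a-14b        # placeholder
        layer_range: [0, 48]
      - model: org/finetune-b-14b        # placeholder
        layer_range: [0, 48]
merge_method: slerp
base_model: org/finetune-a-14b
parameters:
  t:
    - filter: self_attn
      value: [0.0, 0.3, 0.5, 0.7, 1.0]   # gradient across layers for attention weights
    - filter: mlp
      value: [1.0, 0.7, 0.5, 0.3, 0.0]   # opposite gradient for MLP weights
    - value: 0.5                         # default for all other tensors
dtype: bfloat16
"""

def main() -> None:
    config_path = Path("merge-config.yml")
    config_path.write_text(CONFIG)
    # Run the merge; add --cuda on a GPU machine to speed it up.
    subprocess.run(
        ["mergekit-yaml", str(config_path), "merged-model", "--copy-tokenizer"],
        check=True,
    )

if __name__ == "__main__":
    main()
```

A custom toolchain like the one described in the post would generate and chain many such recipes (and LoRA extractions/applications) rather than writing one by hand.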

Datasets

None public yet