
Jonathan Pacifico

jpacifico

AI & ML interests

LLM, SLM, Quantization, Fine-Tuning

Recent Activity

updated a model 4 days ago: jpacifico/Lucie-7B-Instruct-DPO-v1.1
liked a model 4 days ago: Qwen/QwQ-32B

Organizations

OpenLLM France
AIffl : AI For French Language
Intelligent Estate

jpacifico's activity

replied to sometimesanotion's post 13 days ago
reacted to sometimesanotion's post with 👍 14 days ago
I'd like to draw your attention to a Lamarck-based experiment which uses Arcee AI's newly published arcee_fusion merge method for three out of its four merges. Yes, just four. This is a simple one, and its recipe is fully open:

https://huggingface.co/sometimesanotion/Lamarck-14B-v0.7-Fusion

It unifies three branches, all of which feature models that bring Lamarck-14B-v0.7 and Qwenvergence-14B-v12-Prose together. One side features @jpacifico's jpacifico/Chocolatine-2-14B-Instruct-v2.0.3, and the other features @suayptalha's suayptalha/Lamarckvergence-14B paired with my models that were their merge ancestors.

A fusion merge - of a fusion merge and a SLERP of a fusion and an older merge - should demonstrate the new merge method's behavior in interesting ways, especially in the first quarter of the model, where the SLERP has less impact.
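
For readers new to these methods, here is a minimal, hypothetical sketch of what an arcee_fusion step and a SLERP step can look like as mergekit recipes. The model names and parameters below are placeholders or taken from the post, not the published Lamarck-14B-v0.7-Fusion recipe; the actual, fully open recipe is in the linked repository, and the YAML schema shown assumes mergekit's standard config keys.

```python
# Hypothetical sketch of two mergekit recipe steps (NOT the published
# Lamarck-14B-v0.7-Fusion recipe; see the linked repo for the real config).
# Assumes mergekit's YAML schema with merge_method "arcee_fusion" and "slerp".
from pathlib import Path
import textwrap

# An arcee_fusion step: fuse one branch model into a base model.
fusion_step = textwrap.dedent("""\
    merge_method: arcee_fusion
    base_model: sometimesanotion/Lamarck-14B-v0.7            # placeholder base
    models:
      - model: jpacifico/Chocolatine-2-14B-Instruct-v2.0.3   # one branch named in the post
    dtype: bfloat16
""")

# A SLERP step between two ancestor merges (names are placeholders).
slerp_step = textwrap.dedent("""\
    merge_method: slerp
    base_model: ancestor-merge-A
    models:
      - model: ancestor-merge-B
    parameters:
      t: 0.5              # interpolation factor; a real recipe may vary this per layer
    dtype: bfloat16
""")

# Write each step to its own config file for mergekit's CLI, e.g.:
#   mergekit-yaml fusion_step.yml ./out-fusion
Path("fusion_step.yml").write_text(fusion_step)
Path("slerp_step.yml").write_text(slerp_step)
```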

I welcome you to kick the tires and learn from it. Its prose quality is close to Qwenvergence v12's, as you'd expect.

Thank you, @mradermacher and @MaziyarPanahi, for the first-day quantizations! Your work helped get me started. https://huggingface.co/models?other=base_model:quantized:sometimesanotion/Lamarck-14B-v0.7-Fusion