LoRA Finetuning and Merging

#35
by JVal123 - opened

I've finetuned a LoRA on custom data and merged it with this model.
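For context, the merge step referred to here folds the adapter's low-rank update into the base weights, so the result is a plain dense checkpoint with no separate adapter. A minimal numerical sketch of that identity (toy shapes, not Pixtral's actual dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 16

W = rng.standard_normal((d_out, d_in))   # frozen base weight
A = rng.standard_normal((r, d_in))       # LoRA down-projection
B = rng.standard_normal((d_out, r))      # LoRA up-projection (trained)
scale = alpha / r

W_merged = W + scale * (B @ A)           # the "merge": fold adapter into base

x = rng.standard_normal(d_in)
y_adapter = W @ x + scale * (B @ (A @ x))  # base + adapter side path
y_merged = W_merged @ x                    # single dense layer after merge
assert np.allclose(y_adapter, y_merged)
```

In PEFT this is what `merge_and_unload()` does for every adapted layer, which is why the merged model loads like an ordinary Transformers checkpoint.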

Since this repo is the Hugging Face-compatible version of Pixtral-12B, is there a way to convert the merged model (shards, tokenizer, etc.) back to the original Pixtral-12B format used by mistral-inference (mistralai/Pixtral-12B-2409)?
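In principle such a conversion is a state-dict transformation: load the HF shards, rename each weight into the layout mistral-inference expects, and write one consolidated file. A hedged sketch of the renaming step only; the `KEY_MAP` entries below are purely illustrative placeholders, not Pixtral's real key names, and the actual mapping would have to be read off both checkpoints:

```python
def remap_keys(state_dict, key_map):
    """Rewrite each key using the first matching prefix rule.

    state_dict: name -> tensor mapping loaded from the HF shards.
    key_map: HF prefix -> mistral-inference prefix (illustrative only).
    """
    out = {}
    for name, tensor in state_dict.items():
        new_name = name
        for hf_prefix, mi_prefix in key_map.items():
            if name.startswith(hf_prefix):
                new_name = mi_prefix + name[len(hf_prefix):]
                break
        out[new_name] = tensor
    return out

# Hypothetical mapping for illustration -- NOT the real Pixtral key names.
KEY_MAP = {
    "language_model.model.": "",         # assumed prefix, verify on disk
    "vision_tower.": "vision_encoder.",  # assumed prefix, verify on disk
}
```

The tokenizer is a separate problem: mistral-inference uses its own tekken/SentencePiece tokenizer files rather than the HF `tokenizer.json`, so renaming weights alone is not sufficient.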

