LoRA Finetuning and Merging
#35 opened by JVal123
I've fine-tuned a LoRA on custom data and merged it into this model.
Since this is the Hugging Face-compatible version of Pixtral-12B, is there a way to convert the resulting merged model (shards, tokenizer, etc.) back to the original Pixtral-12B format used by mistral-inference (mistralai/Pixtral-12B-2409)?
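For context, the merge step looked roughly like this (a minimal sketch, assuming the adapter was trained with PEFT; the base model ID and the paths are placeholders):

```python
from peft import PeftModel
from transformers import LlavaForConditionalGeneration

# Load the HF-compatible Pixtral base model (placeholder repo ID).
base = LlavaForConditionalGeneration.from_pretrained("mistral-community/pixtral-12b")

# Attach the trained LoRA adapter (placeholder path).
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")

# Fold the LoRA deltas into the base weights and drop the adapter wrappers.
merged = model.merge_and_unload()
merged.save_pretrained("pixtral-12b-merged")
```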
Hi,
For that, you could write the reverse of the conversion script: https://github.com/huggingface/transformers/blob/main/src/transformers/models/pixtral/convert_pixtral_weights_to_hf.py
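Roughly, such a reverse script would load the HF shards, rename the keys back to the mistral-inference layout, and write a single consolidated file. Here is a sketch of the shape of it; the rename rules below are illustrative placeholders only, so derive the real inverse table from the linked script, which also applies weight permutations (e.g., for rotary embeddings) that would need undoing:

```python
import re
from pathlib import Path

import safetensors.torch
from huggingface_hub import snapshot_download

# Example inverse renames (hypothetical; verify each rule against the
# mapping in convert_pixtral_weights_to_hf.py).
INVERSE_RENAMES = [
    (r"^language_model\.model\.", ""),
    (r"self_attn\.q_proj", "attention.wq"),
    (r"self_attn\.k_proj", "attention.wk"),
    (r"self_attn\.v_proj", "attention.wv"),
    (r"self_attn\.o_proj", "attention.wo"),
]

def to_mistral_key(hf_key: str) -> str:
    """Map an HF parameter name back to the mistral-inference name."""
    for pattern, repl in INVERSE_RENAMES:
        hf_key = re.sub(pattern, repl, hf_key)
    return hf_key

# Download the merged HF model (placeholder repo ID) and gather all shards.
local = Path(snapshot_download("your-username/pixtral-12b-merged"))
state_dict = {}
for shard in sorted(local.glob("*.safetensors")):
    for key, tensor in safetensors.torch.load_file(shard).items():
        # NOTE: attention q/k weights also need the inverse of the rotary
        # permutation the forward script applies; not shown here.
        state_dict[to_mistral_key(key)] = tensor

# mistral-inference expects a single consolidated checkpoint file.
safetensors.torch.save_file(state_dict, "consolidated.safetensors")
```

The tokenizer side would need a similar reverse mapping back to the tekken.json format that mistral-inference expects.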