Change main weights precision to bfloat16

#5
by Molbap (HF staff) - opened

This changes the base weights to bfloat16 by default instead of float32.
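
A minimal sketch of what this means for downstream users, assuming a Mistral repo id (the exact repo id is an illustration, not stated in this PR): with `torch_dtype="auto"`, `from_pretrained` loads the weights in the dtype they are stored in, so after this change the model comes up in bfloat16 rather than float32.

```python
import torch
from transformers import AutoModelForCausalLM

repo_id = "mistralai/Mistral-7B-v0.1"  # illustrative repo id

# torch_dtype="auto" respects the dtype the checkpoint is stored in,
# so with bfloat16 base weights the model loads in bfloat16.
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto")
print(model.dtype)  # expected: torch.bfloat16
```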

Molbap changed pull request title from Upload MistralForCausalLM to Change main weights precision to bfloat16

You don't need to, as you can use Unsloth in training to do this with the trainer!
FP16 is the correct format: when loading in Transformers you can specify to change it there at load time, also using bitsandbytes!
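
A sketch of the two load-time options the comment refers to, under the same illustrative repo id as above: overriding the stored precision via `torch_dtype`, or quantizing on the fly with bitsandbytes via `BitsAndBytesConfig`.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

repo_id = "mistralai/Mistral-7B-v0.1"  # illustrative repo id

# Option 1: override the stored precision at load time.
model_fp16 = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.float16
)

# Option 2: quantize with bitsandbytes (requires `pip install bitsandbytes`).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
model_4bit = AutoModelForCausalLM.from_pretrained(
    repo_id, quantization_config=bnb_config
)
```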

Ready to merge
This branch is ready to get merged automatically.
