Mistral-AI Collection: quantized versions of models by mistralai (19 items)
This is a quantized version of princeton-nlp/Mistral-7B-Base-SFT-RDPO, created using llama.cpp.
This model was released with the preprint *SimPO: Simple Preference Optimization with a Reference-Free Reward*. Please refer to our repository for more details.
Base model: princeton-nlp/Mistral-7B-Base-SFT-RDPO
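
Since the card states the model was quantized with llama.cpp, a GGUF file from this repo can typically be loaded with the llama-cpp-python bindings. The snippet below is a minimal sketch, not an official usage guide; the filename `Mistral-7B-Base-SFT-RDPO.Q4_K_M.gguf` is an assumed example and should be replaced with the file actually shipped in this repository.

```python
# Minimal sketch: run a GGUF quantization of Mistral-7B-Base-SFT-RDPO
# with llama-cpp-python. The .gguf filename is a hypothetical example;
# substitute whichever quantization file this repo provides.
from llama_cpp import Llama

llm = Llama(
    model_path="Mistral-7B-Base-SFT-RDPO.Q4_K_M.gguf",  # assumed filename
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to GPU if built with GPU support
)

output = llm(
    "Explain preference optimization in one sentence.",
    max_tokens=64,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```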