QuantFactory/Barcenas-14b-Phi-3-medium-ORPO-GGUF
This is a quantized version of Danielbrdz/Barcenas-14b-Phi-3-medium-ORPO, created using llama.cpp.
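For reference, a GGUF file from this repository can be run locally with llama-cpp-python. The sketch below is a minimal example; the exact quantization filename is an assumption and should be checked against the repository's file list.

```python
# Minimal inference sketch with llama-cpp-python; the GGUF filename below is
# an assumption -- pick the actual file for the quantization you want.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="QuantFactory/Barcenas-14b-Phi-3-medium-ORPO-GGUF",
    filename="Barcenas-14b-Phi-3-medium-ORPO.Q4_K_M.gguf",  # assumed filename
)

# Load the quantized model with a modest context window.
llm = Llama(model_path=model_path, n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hola, ¿quién eres?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```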
Original Model Card
Barcenas-14b-Phi-3-medium-ORPO
A model trained with the innovative ORPO (Odds Ratio Preference Optimization) method, based on the robust VAGOsolutions/SauerkrautLM-Phi-3-medium.
The model was trained on the mlabonne/orpo-dpo-mix-40k dataset, which combines diverse data sources to enhance conversational capabilities and contextual understanding.
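As a rough illustration of how such a run can be set up, the following sketch uses the TRL library's ORPOTrainer with the base model and dataset named above. The hyperparameters, output path, and sequence length are illustrative assumptions, not the settings used for this model.

```python
# Illustrative ORPO fine-tuning sketch with TRL; all hyperparameters and the
# output path are assumptions, not the actual training configuration.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "VAGOsolutions/SauerkrautLM-Phi-3-medium"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# The dataset provides chosen/rejected preference pairs, which ORPO consumes.
dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train")

config = ORPOConfig(
    output_dir="barcenas-14b-orpo",  # assumed output path
    beta=0.1,                        # assumed ORPO beta value
    max_length=2048,                 # assumed sequence length
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-6,
    num_train_epochs=1,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,  # newer TRL versions call this processing_class
)
trainer.train()
```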
Made with ❤️ in Guadalupe, Nuevo Leon, Mexico 🇲🇽
Available quantizations
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
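As a rough guide to hardware requirements, the sketch below estimates file size for a ~14B-parameter model at each bit width. Real GGUF quantization schemes mix block formats and keep some tensors at higher precision, so actual file and memory sizes will differ.

```python
# Back-of-envelope GGUF size estimate for a ~14B-parameter model.
# The parameter count is an approximation; real quants will be somewhat larger.
PARAMS = 14e9  # assumed ~14B parameters

for bits in (2, 3, 4, 5, 6, 8):
    gib = PARAMS * bits / 8 / 1024**3  # bits -> bytes -> GiB
    print(f"{bits}-bit: ~{gib:.1f} GiB")
```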