---
base_model: nbeerbower/bophades-mistral-math-DPO-7B
datasets:
- kyujinpy/orca_math_dpo
inference: false
library_name: transformers
license: apache-2.0
merged_models:
- nbeerbower/bophades-v2-mistral-7B
pipeline_tag: text-generation
quantized_by: Suparious
tags:
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
---
# nbeerbower/bophades-mistral-math-DPO-7B AWQ
- Model creator: nbeerbower
- Original model: bophades-mistral-math-DPO-7B
## Model Summary
bophades-v2-mistral-7B fine-tuned on the kyujinpy/orca_math_dpo dataset.

Fine-tuned using an A100 on Google Colab. 🙏
Fine-tune a Mistral-7b model with Direct Preference Optimization - Maxime Labonne
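For reference, a minimal sketch of querying the 4-bit AWQ quantization with `transformers` (which loads AWQ weights directly when `autoawq` is installed). The repo id and the Mistral-style `[INST]` prompt wrapper below are assumptions for illustration, not details taken from this card.

```python
# Hypothetical repo id for the AWQ weights -- replace with the actual one.
MODEL_ID = "solidrust/bophades-mistral-math-DPO-7B-AWQ"

def build_prompt(question: str) -> str:
    # Mistral-style instruction wrapper (assumed chat format).
    return f"[INST] {question} [/INST]"

def generate(question: str, max_new_tokens: int = 256) -> str:
    # Imported lazily so the prompt helper stays dependency-free.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(build_prompt(question), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )

if __name__ == "__main__":
    print(generate("What is 12 * 7?"))
```

Running the model requires a GPU with enough memory for the 4-bit weights (roughly 4-5 GB for a 7B model).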