shenxq/zephyr-7b-dpo-qlora
Tags: PEFT · TensorBoard · Safetensors · mistral · alignment-handbook · trl · dpo · 4-bit precision · bitsandbytes · Generated from Trainer
Dataset: snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset
License: apache-2.0
zephyr-7b-dpo-qlora/README.md
Commit History
End of training
de0c06e (verified) · shenxq committed on Mar 17, 2024

Model save
29be7a4 (verified) · shenxq committed on Mar 17, 2024

End of training
d81199e (verified) · shenxq committed on Mar 17, 2024

Model save
ac637be (verified) · shenxq committed on Mar 17, 2024