Llama-3-5B-Sheard
A pruned version of Llama-3-8B.
Tools used: PruneMe, mergekit.
Meta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.
Training
After being sliced with mergekit, the model was continually pretrained on MiniPile for 1 epoch (~100k samples). It was then trained with ORPO on DPO preference pairs generated by Llama-3-70B.
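The slicing step can be sketched as a mergekit passthrough config. The exact layer ranges used for this model are not stated, so the ranges below are illustrative assumptions only (Llama-3-8B has 32 decoder layers; dropping roughly a third yields a ~5B model):

```yaml
# Hypothetical mergekit config: prune middle layers of Llama-3-8B
# via the passthrough merge method. Layer ranges are assumptions,
# not the ones actually used for this model.
slices:
  - sources:
      - model: meta-llama/Meta-Llama-3-8B
        layer_range: [0, 20]
  - sources:
      - model: meta-llama/Meta-Llama-3-8B
        layer_range: [28, 32]
merge_method: passthrough
dtype: bfloat16
```

Tools like PruneMe estimate which contiguous layer block is most redundant (e.g. via inter-layer similarity), and the resulting range is what gets cut from the config above.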
Disclaimer
This model is for testing purposes only. When the system prompt is not empty, the output may repeat and fail to stop.
Join our Discord.