meta-llama/llama-2-7b-chat-hf, fine-tuned for 215 steps on meta-math/MetaMathQA-40K. Final training loss: 0.7568.

Source code
Downloads last month: 13