|
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)
|
# Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged - bnb 8bits

- Model creator: https://huggingface.co/dhmeltzer/
- Original model: https://huggingface.co/dhmeltzer/Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged/
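
This is an 8-bit bitsandbytes quantization of the original model above. As a minimal, unofficial sketch (not the quantizer's documented instructions), a checkpoint like this is typically loaded through `transformers`; the repository id below is a placeholder for this repo's id on the Hub, and the explicit `BitsAndBytesConfig` is usually redundant when the saved checkpoint already embeds its quantization config:

```python
# Minimal sketch: loading an 8-bit bitsandbytes checkpoint with transformers.
# MODEL_ID is a placeholder -- substitute this repository's id on the Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_ID = "<this-repo-id>"  # placeholder, not a real repository id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    device_map="auto",  # dispatch layers across available devices
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
)

prompt = "Explain like I'm five: why is the sky blue?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```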
|
Original model description:

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_dhmeltzer__Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged).
|
| Metric               | Value |
|----------------------|-------|
| Avg.                 | 44.13 |
| ARC (25-shot)        | 53.67 |
| HellaSwag (10-shot)  | 78.21 |
| MMLU (5-shot)        | 45.9  |
| TruthfulQA (0-shot)  | 46.13 |
| Winogrande (5-shot)  | 73.8  |
| GSM8K (5-shot)       | 4.7   |
| DROP (3-shot)        | 6.53  |
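
These scores were produced by the leaderboard's evaluation harness on the original (unquantized) model. As a rough sketch for spot-checking one row locally (assuming EleutherAI's `lm-eval` package; the leaderboard's exact harness version, task names, and settings may differ, so scores can vary slightly), the ARC (25-shot) entry could be approximated like this:

```python
# Rough sketch: re-run one leaderboard row with EleutherAI's lm-evaluation-harness.
# Assumes `pip install lm-eval`; the leaderboard's exact harness version and
# task configuration may not match, so results can differ slightly.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",  # Hugging Face transformers backend
    model_args="pretrained=dhmeltzer/Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged",
    tasks=["arc_challenge"],  # corresponds to the ARC (25-shot) row above
    num_fewshot=25,
    batch_size=8,
)
print(results["results"]["arc_challenge"])
```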
|