---
datasets:
  - Open-Orca/SlimOrca-Dedup
  - jondurbin/airoboros-3.2
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
---
## These are GGUF quants of [Sappha-2b-v3](https://huggingface.co/Fizzarolli/sappha-2b-v3). The original model card is below:

# sappha-2b-v3
a slightly less experimental QLoRA instruct finetune of the gemma-2b base model, trained with unsloth.
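
as an illustration of what that means in practice, a minimal QLoRA setup with unsloth might look like the sketch below; the base checkpoint id, LoRA rank, and target modules are assumptions, not the actual training config (which isn't published here):

```python
# rough sketch only: load gemma-2b in 4-bit and attach a LoRA adapter (QLoRA).
# the model id and hyperparameters are illustrative guesses, not the real recipe.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-2b",  # assumed base checkpoint
    max_seq_length=2048,
    load_in_4bit=True,              # 4-bit base weights: the "Q" in QLoRA
)

model = FastLanguageModel.get_peft_model(
    model,
    r=16,                           # LoRA rank (guess)
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
# a standard SFT loop (e.g. trl's SFTTrainer) would then run over the
# SlimOrca-Dedup / airoboros-3.2 chat data listed in the metadata above.
```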

## benchmarks

|                        | gemma-2b-it | sappha-2b-v3 | dolphin-2.8-gemma-2b |
| ---------------------- | ----------- | ------------ | -------------------- |
| MMLU (five-shot)       | 36.98       | **38.02**    | 37.89                |
| HellaSwag (zero-shot)  | 49.22       | **51.70**    | 47.79                |
| PIQA (one-shot)        | 75.08       | **75.46**    | 71.16                |
| TruthfulQA (zero-shot) | **37.51**   | 31.65        | 37.15                |
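
for reference, numbers like these can be re-run with EleutherAI's lm-evaluation-harness, mirroring the per-task few-shot counts above; whether the table was actually produced this way is not stated, and the TruthfulQA task variant (mc2) is an assumption:

```python
# rough sketch: evaluate the full-precision model with lm-evaluation-harness,
# using the same few-shot counts as the table above.
from lm_eval import simple_evaluate

for task, shots in [("mmlu", 5), ("hellaswag", 0), ("piqa", 1), ("truthfulqa_mc2", 0)]:
    results = simple_evaluate(
        model="hf",
        model_args="pretrained=Fizzarolli/sappha-2b-v3",
        tasks=[task],
        num_fewshot=shots,
    )
    print(task, results["results"][task])
```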


## prompt format
basic chatml:
```
<|im_start|>system
You are a useful and helpful AI assistant.<|im_end|>
<|im_start|>user
what are LLMs?<|im_end|>
<|im_start|>assistant
LLMs, or Large Language Models, are advanced artificial intelligence systems that can perform tasks similar to human language. They are trained on vast amounts of data and can understand and respond to human queries. LLMs are often used in various applications, such as language translation, text generation, and question answering.<|im_end|>
```
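
since this repo ships GGUF files, here is a minimal sketch of using that ChatML format through llama-cpp-python, which has a built-in "chatml" chat format; the quant filename is a placeholder, pick a real one from the file list:

```python
# rough sketch: run a GGUF quant locally with llama-cpp-python using ChatML.
from llama_cpp import Llama

llm = Llama(
    model_path="sappha-2b-v3.Q4_K_M.gguf",  # placeholder filename
    n_ctx=2048,
    chat_format="chatml",                   # matches the prompt format above
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a useful and helpful AI assistant."},
        {"role": "user", "content": "what are LLMs?"},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```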

## quants
gguf: https://huggingface.co/Fizzarolli/sappha-2b-v3-GGUF
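
a minimal sketch for pulling one of the quant files programmatically with huggingface_hub; the filename is a placeholder, check the repo's file list for the actual names:

```python
# rough sketch: download a GGUF quant from this repo.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="Fizzarolli/sappha-2b-v3-GGUF",
    filename="sappha-2b-v3.Q4_K_M.gguf",  # placeholder filename
)
print(gguf_path)  # local path to pass to llama.cpp / llama-cpp-python
```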

## what happened to v2?
it was a private failure :)