---
base_model: h2oai/h2o-danube-1.8b-chat
datasets:
- HuggingFaceH4/ultrafeedback_binarized
- Intel/orca_dpo_pairs
- argilla/distilabel-math-preference-dpo
- Open-Orca/OpenOrca
- OpenAssistant/oasst2
- HuggingFaceH4/ultrachat_200k
- meta-math/MetaMathQA
license: apache-2.0
language:
- en
model_creator: h2oai
model_name: h2o-danube-1.8b-chat
model_type: mistral
inference: false
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
pipeline_tag: text-generation
prompt_template: |
  <|system|>{{system_message}}</s>
  <|prompt|>{{prompt}}</s>
  <|answer|>
quantized_by: brittlewis12
---
# h2o-danube-1.8b-chat GGUF
Original model: [h2o-danube-1.8b-chat](https://huggingface.co/h2oai/h2o-danube-1.8b-chat)
Model creator: [h2oai](https://huggingface.co/h2oai)
This repo contains GGUF format model files for h2oai’s h2o-danube-1.8b-chat.
> h2o-danube-1.8b-chat is a chat fine-tuned model by H2O.ai with 1.8 billion parameters. For details, please refer to our [Technical Report](https://arxiv.org/abs/2401.16818). We release three versions of this model:
>
> - h2oai/h2o-danube-1.8b-base Base model
> - h2oai/h2o-danube-1.8b-sft SFT tuned
> - h2oai/h2o-danube-1.8b-chat SFT + DPO tuned
>
> We adjust the Llama 2 architecture for a total of around 1.8b parameters. We use the original Llama 2 tokenizer with a vocabulary size of 32,000 and train our model up to a context length of 16,384. We incorporate the sliding window attention from Mistral with a size of 4,096.
Refer to h2o.ai’s model [disclaimer](https://huggingface.co/h2oai/h2o-danube-1.8b-chat/blob/main/README.md#disclaimer) for terms of use.
### What is GGUF?
GGUF is a file format for representing AI models. It is the third version of the format, introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Converted using llama.cpp b2037 ([1cfb537](https://github.com/ggerganov/llama.cpp/commits/1cfb5372cf5707c8ec6dde7c874f4a44a6c4c915))
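As a quick sanity check after downloading, the GGUF files in this repo can be loaded with any llama.cpp-based runtime. Below is a minimal sketch using the llama-cpp-python bindings; the filename `h2o-danube-1.8b-chat.Q4_K_M.gguf` is a placeholder for whichever quantization you actually download.
```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Load the quantized model. n_ctx can be raised toward the model's trained
# context length (16,384) at the cost of more memory.
llm = Llama(
    model_path="h2o-danube-1.8b-chat.Q4_K_M.gguf",  # placeholder filename
    n_ctx=4096,
)

# Plain completion call to confirm the model loads and generates.
out = llm(
    "<|prompt|>Hello, who are you?</s><|answer|>",
    max_tokens=64,
    stop=["</s>"],
)
print(out["choices"][0]["text"])
```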
### Prompt template:
```
<|system|>{{system_message}}</s>
<|prompt|>{{prompt}}</s>
<|answer|>
```
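For illustration, here is a small sketch of filling this template in Python before passing the string to your GGUF runtime. The system and user messages are placeholders; it simply mirrors the template above line for line.
```python
# Chat format for h2o-danube-1.8b-chat, as shown in the template above.
PROMPT_TEMPLATE = (
    "<|system|>{system_message}</s>\n"
    "<|prompt|>{prompt}</s>\n"
    "<|answer|>"
)

def build_prompt(prompt: str, system_message: str = "") -> str:
    """Return a prompt string formatted for h2o-danube-1.8b-chat."""
    return PROMPT_TEMPLATE.format(system_message=system_message, prompt=prompt)

if __name__ == "__main__":
    text = build_prompt(
        prompt="Why is the sky blue?",
        system_message="You are a helpful assistant.",
    )
    print(text)
    # When generating, use "</s>" as a stop sequence so the model ends its answer cleanly.
```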
---
## Download & run with [cnvrs](https://twitter.com/cnvrsai) on iPhone, iPad, and Mac!
![cnvrs.ai](https://pbs.twimg.com/profile_images/1744049151241797632/0mIP-P9e_400x400.jpg)
[cnvrs](https://testflight.apple.com/join/sFWReS7K) is the best app for private, local AI on your device:
- create & save **Characters** with custom system prompts & temperature settings
- download and experiment with any **GGUF model** you can [find on HuggingFace](https://huggingface.co/models?library=gguf)!
- make it your own with custom **Theme colors**
- powered by Metal ⚡️ & [Llama.cpp](https://github.com/ggerganov/llama.cpp), with **haptics** during response streaming!
- **try it out** yourself today, on [Testflight](https://testflight.apple.com/join/sFWReS7K)!
- follow [cnvrs on twitter](https://twitter.com/cnvrsai) to stay up to date
---
## Original Model Evaluations:
> Commonsense, world-knowledge and reading comprehension tested in 0-shot:
| Benchmark | acc_n |
|:--------------|:--------:|
| ARC-easy | 67.51 |
| ARC-challenge | 39.25 |
| BoolQ | 77.89 |
| Hellaswag | 67.60 |
| OpenBookQA | 39.20 |
| PiQA | 76.71 |
| TriviaQA | 36.29 |
| Winogrande | 65.35 |
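The original card does not state the exact evaluation setup, but 0-shot results of this kind can be reproduced approximately with EleutherAI's lm-evaluation-harness. A hedged sketch (evaluating the original full-precision HF weights, not these GGUF quants; task names and API follow recent harness versions and scores may differ slightly from the table):
```python
# pip install lm-eval
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",
    model_args="pretrained=h2oai/h2o-danube-1.8b-chat,dtype=bfloat16",
    tasks=[
        "arc_easy", "arc_challenge", "boolq", "hellaswag",
        "openbookqa", "piqa", "triviaqa", "winogrande",
    ],
    num_fewshot=0,
)
print(results["results"])
```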