h2o-danube-1.8b-chat GGUF
Original model: h2o-danube-1.8b-chat
Model creator: h2oai
This repo contains GGUF format model files for h2oai’s h2o-danube-1.8b-chat.
h2o-danube-1.8b-chat is a chat fine-tuned model by H2O.ai with 1.8 billion parameters. For details, please refer to our Technical Report. We release three versions of this model:
- h2oai/h2o-danube-1.8b-base Base model
- h2oai/h2o-danube-1.8b-sft SFT tuned
- h2oai/h2o-danube-1.8b-chat SFT + DPO tuned
We adjust the Llama 2 architecture for a total of around 1.8b parameters. We use the original Llama 2 tokenizer with a vocabulary size of 32,000 and train our model up to a context length of 16,384. We incorporate the sliding window attention from Mistral with a window size of 4,096.
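To illustrate what sliding window attention means in practice, here is a minimal sketch (not the model's actual implementation) of the attention mask it implies: each query position attends causally, but only to the most recent `window` key positions.

```python
def sliding_window_mask(seq_len: int, window: int) -> list[list[bool]]:
    """Boolean mask: entry [i][j] is True iff query position i may
    attend to key position j, i.e. j <= i (causal) and i - j < window."""
    return [[0 <= i - j < window for j in range(seq_len)]
            for i in range(seq_len)]

# With window=4096 (as in h2o-danube-1.8b) each token sees at most
# the previous 4096 tokens, even at the full 16,384 context length.
mask = sliding_window_mask(seq_len=6, window=3)
```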
Refer to h2o.ai’s model disclaimer for terms of use.
What is GGUF?
GGUF is a file format for representing AI models. It is the third version of the format, introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Converted using llama.cpp release b2037 (commit 1cfb537).
Prompt template:

```
<|system|>{{system_message}}</s>
<|prompt|>{{prompt}}</s>
<|answer|>
```
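A minimal helper for assembling this template in code might look like the sketch below. The function name is our own; the newline placement mirrors the template as shown above, but verify against the model's bundled chat template before relying on it.

```python
def build_prompt(prompt: str, system_message: str = "") -> str:
    """Format a user prompt (and optional system message) using the
    h2o-danube-1.8b-chat template shown above. Hypothetical helper;
    special tokens are taken verbatim from the model card."""
    return (
        f"<|system|>{system_message}</s>\n"
        f"<|prompt|>{prompt}</s>\n"
        f"<|answer|>"
    )

text = build_prompt("What is GGUF?", system_message="You are helpful.")
```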
Download & run with cnvrs on iPhone, iPad, and Mac!
cnvrs is the best app for private, local AI on your device:
- create & save Characters with custom system prompts & temperature settings
- download and experiment with any GGUF model you can find on HuggingFace!
- make it your own with custom Theme colors
- powered by Metal ⚡️ & Llama.cpp, with haptics during response streaming!
- try it out yourself today, on TestFlight!
- follow cnvrs on Twitter to stay up to date
Original Model Evaluations:
Commonsense, world-knowledge, and reading comprehension, evaluated 0-shot:
| Benchmark | acc_n |
|---|---|
| ARC-easy | 67.51 |
| ARC-challenge | 39.25 |
| BoolQ | 77.89 |
| Hellaswag | 67.60 |
| OpenBookQA | 39.20 |
| PiQA | 76.71 |
| TriviaQA | 36.29 |
| Winogrande | 65.35 |