---
language:
- nl
pipeline_tag: text-generation
tags:
- granite
- granite 3.0
- schaapje
- chat
license: apache-2.0
inference: false
---

<p align="center">
  <img src="sheep.png" alt="Schaapje logo" width="750"/>
</p>

# Schaapje-2B-Chat-V1.0-GGUF

## Introduction

This is a collection of GGUF files created from [Schaapje-2B-Chat-V1.0](https://huggingface.co/robinsmits/Schaapje-2B-Chat-V1.0).

It contains the files in the following quantization formats:

`Q5_0`, `Q5_K_M`, `Q6_K`, `Q8_0`

## Requirements
Before you can use the GGUF files you need to clone the [llama.cpp repository](https://github.com/ggerganov/llama.cpp) and build it by following the official installation guide.
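As a minimal sketch of getting started, you could download one of the quantized files and start an interactive chat with `llama-cli`. The repository id and exact GGUF file name below are assumptions based on this model card's naming; check the file listing of this repository for the actual names.

```shell
# Download one quantized file from this repository
# (file name is an assumption; pick the quantization you want).
huggingface-cli download robinsmits/Schaapje-2B-Chat-V1.0-GGUF \
    Schaapje-2B-Chat-V1.0-Q5_0.gguf --local-dir .

# Start an interactive chat session (-cnv) with the downloaded model.
llama-cli -m Schaapje-2B-Chat-V1.0-Q5_0.gguf -cnv \
    -p "Je bent een behulpzame Nederlandse AI-assistent."
```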

## Recommendation

Experimenting with the llama.cpp sampling parameters can have a big impact on the quality of the generated text, so it is recommended to try different settings yourself. In my own experiments, quantization `Q5_0` or better gave good quality output.
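As a starting point for such experimentation, the sketch below varies a few common llama.cpp sampling flags. The specific values are illustrative assumptions, not settings recommended by the model author, and the model file name is likewise an assumption.

```shell
# Sampling flags worth experimenting with:
#   --temp            sampling temperature (lower = more deterministic)
#   --top-p           nucleus sampling cutoff
#   --repeat-penalty  discourages repeated tokens
#   -n                maximum number of tokens to generate
llama-cli -m Schaapje-2B-Chat-V1.0-Q5_0.gguf -cnv \
    --temp 0.7 --top-p 0.9 --repeat-penalty 1.1 -n 512
```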