---
base_model: wolfram/miquliz-120b
inference: false
model_creator: Wolfram Ravenwolf
model_name: miquliz-120b
---

# miquliz-120b - Q4 GGUF

- Model creator: [Wolfram Ravenwolf](https://huggingface.co/wolfram)
- Original model: [miquliz-120b](https://huggingface.co/wolfram/miquliz-120b)

## Description

This repo contains Q4_K_S and Q4_K_M GGUF format model files for [Wolfram Ravenwolf's miquliz-120b](https://huggingface.co/wolfram/miquliz-120b).
|
## Prompt template: Mistral

```
[INST] {prompt} [/INST]
```
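The `{prompt}` placeholder can be filled programmatically before the text is sent to the model. A minimal sketch (the helper name is illustrative, not part of any library):

```python
def build_mistral_prompt(user_message: str) -> str:
    """Wrap a user message in the Mistral [INST] ... [/INST] template."""
    return f"[INST] {user_message} [/INST]"

print(build_mistral_prompt("Why is the sky blue?"))
# -> [INST] Why is the sky blue? [/INST]
```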
|
## Provided files
|
| Name | Quant method | Bits | Size |
| ---- | ---- | ---- | ---- |
| miquliz-120b.Q4_K_S.gguf | Q4_K_S | 4 | 66.81 GB |
| miquliz-120b.Q4_K_M.gguf | Q4_K_M | 4 | 70.64 GB |
|
|
Note: Hugging Face does not support uploading files larger than 50 GB, so each model is uploaded in split parts.
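The split parts can be reassembled by concatenating them in order. A minimal sketch using dummy files as stand-ins (the actual part names in this repo may differ, so check the repo's file listing before merging):

```shell
# Dummy stand-ins for the real split files; the real part names are an assumption.
printf 'AAA' > miquliz.gguf.part1
printf 'BBB' > miquliz.gguf.part2

# The shell expands the glob in lexical order, so the parts concatenate correctly.
cat miquliz.gguf.part* > miquliz.gguf
cat miquliz.gguf   # -> AAABBB

# Remove the parts once the merged file has been verified.
rm miquliz.gguf.part1 miquliz.gguf.part2 miquliz.gguf
```

Because the glob is expanded lexically, this only works when the part suffixes sort in the intended order; otherwise list the parts explicitly.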