---
license: mit
datasets:
- nvidia/Llama-Nemotron-Post-Training-Dataset
language:
- en
- es
- ar
- fr
base_model:
- ykarout/phi4-deepseek-r1-distilled-v8-GGUF
- microsoft/phi-4
library_name: transformers
tags:
- deepseek
- r1
- reasoning
- phi-4
- math
- code
- chemistry
- science
- biology
- art
- unsloth
- finance
- legal
- medical
- text-generation-inference
---
# Phi-4 DeepSeek Distilled v8 GGUF
This repository contains GGUF quantized versions of the Phi-4 DeepSeek R1 Distilled model. The GGUF files are optimized for local inference with frameworks such as [llama.cpp](https://github.com/ggerganov/llama.cpp), [Ollama](https://ollama.ai/), and LM Studio.
## Model Information
- **Base Model**: Phi-4 DeepSeek R1 Distilled
- **Parameters**: 14.7B
- **Architecture**: Phi3
- **Context Length**: 16384 tokens
- **Training Data**: An improved version of Phi-4, distilled with DeepSeek R1 reasoning data
- **License**: MIT
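To double-check these metadata fields on a downloaded file, the `gguf-dump` script from the `gguf` Python package (maintained in the llama.cpp repository) can print the embedded header. A minimal sketch; the file name is an assumption based on this repo's naming pattern:
```bash
# Install the gguf utilities and dump the header of a downloaded quant.
# Substitute the file name of the quant you actually downloaded.
pip install gguf
gguf-dump phi4-deepseek-r1-distilled-v8-q4_k_m.gguf | head -n 40
```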
## Available Quantizations
| File | Quantization | Size | Use Case |
|------|--------------|------|----------|
| phi4-deepseek-r1-distilled-v8-q8_0.gguf | Q8_0 | ~15.6 GB | Highest quality, near-lossless; needs the most memory |
| phi4-deepseek-r1-distilled-v8-q6_k.gguf | Q6_K | ~12.1 GB | Very high quality at a noticeably smaller size |
| phi4-deepseek-r1-distilled-v8-q5_k_m.gguf | Q5_K_M | ~10.6 GB | High quality; good balance for mid-range GPUs |
| phi4-deepseek-r1-distilled-v8-q4_k_m.gguf | Q4_K_M | ~9.0 GB | Smallest listed quant; best size/quality trade-off |

Sizes are approximate for a 14.7B-parameter model; file names follow the repo's naming pattern.
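A single quant can be fetched without cloning the whole repository. A sketch using `huggingface-cli`, assuming the file naming pattern shown above:
```bash
# Download only the Q4_K_M quant into the current directory.
pip install -U "huggingface_hub[cli]"
huggingface-cli download ykarout/phi4-deepseek-r1-distilled-v8-GGUF \
  phi4-deepseek-r1-distilled-v8-q4_k_m.gguf --local-dir .
```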
## Chat Template
This model uses the ChatML format with the following structure:
```
<|im_start|>system<|im_sep|>System message here<|im_end|>
<|im_start|>user<|im_sep|>User message here<|im_end|>
<|im_start|>assistant<|im_sep|>Assistant response here<|im_end|>
```
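For frontends that do not apply the template automatically, the prompt has to be assembled by hand. A minimal sketch with llama.cpp's `llama-cli` (the file name is assumed from this repo's naming pattern):
```bash
# Build a ChatML-formatted prompt manually and run a single completion.
PROMPT='<|im_start|>system<|im_sep|>You are a helpful assistant.<|im_end|><|im_start|>user<|im_sep|>What is 17 * 23?<|im_end|><|im_start|>assistant<|im_sep|>'
./llama-cli -m phi4-deepseek-r1-distilled-v8-q4_k_m.gguf -p "$PROMPT" -n 256 --temp 0.15
```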
## Usage with Ollama
Create a custom Modelfile (paste this into a file named `Modelfile`):
```
FROM /replace/with/path/to/your/gguf-file.gguf
PARAMETER temperature 0.15
PARAMETER top_p 0.93
PARAMETER top_k 50
PARAMETER repeat_penalty 1.15
TEMPLATE """{{ if .System }}<|im_start|>system<|im_sep|>{{ .System }}<|im_end|>{{ end }}{{ range .Messages }}{{ if eq .Role "user" }}<|im_start|>user<|im_sep|>{{ .Content }}<|im_end|>{{ else if eq .Role "assistant" }}<|im_start|>assistant<|im_sep|>{{ .Content }}<|im_end|>{{ end }}{{ end }}<|im_start|>assistant<|im_sep|>"""
PARAMETER stop "<|im_start|>"
PARAMETER stop "<|im_end|>"
```
Then create and run your model:
```bash
ollama create phi4-deepseek-r1 -f Modelfile
ollama run phi4-deepseek-r1
```
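Once created, the model can also be queried over Ollama's local REST API (default port 11434); the model name matches the one passed to `ollama create`:
```bash
# One-shot generation through the Ollama HTTP API.
curl http://localhost:11434/api/generate -d '{
  "model": "phi4-deepseek-r1",
  "prompt": "Explain the difference between Q8_0 and Q4_K_M quantization.",
  "stream": false
}'
```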
## Usage with LM Studio
1. Use the model search to find this repository on Hugging Face
2. Download and load the model
3. Set the chat parameters (top_p, top_k, repeat_penalty, etc.)
4. Chat with the model (LM Studio detects the chat template automatically, so no manual template configuration is needed, unlike Ollama)
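LM Studio can also expose the loaded model through its OpenAI-compatible local server (default `http://localhost:1234/v1`). A minimal sketch, assuming the server has been started from LM Studio's Developer tab:
```bash
# Query LM Studio's OpenAI-compatible endpoint with the currently loaded model.
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Summarize the Phi-4 architecture in two sentences."}
    ],
    "temperature": 0.15
  }'
```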
## Usage with llama.cpp
```bash
# Download the model from Hugging Face
wget https://huggingface.co/ykarout/phi4-deepseek-r1-distilled-v8-GGUF/resolve/main/phi4-deepseek-r1-distilled-v8-q8_0.gguf
# Run the model in interactive chat mode (newer builds name the binary llama-cli;
# older builds used: ./main -m <model> -n 1024 --color -i -ins --chatml)
./llama-cli -m phi4-deepseek-r1-distilled-v8-q8_0.gguf -n 1024 --color -cnv
```
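llama.cpp also bundles `llama-server`, which serves the model over an OpenAI-compatible HTTP API; a minimal sketch using the full 16K context window:
```bash
# Start an OpenAI-compatible server on port 8080 with the full context window.
./llama-server -m phi4-deepseek-r1-distilled-v8-q8_0.gguf -c 16384 --port 8080

# In another shell, send a chat request.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "What is the capital of France?"}]}'
```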
## Benchmarks & Performance Notes
- Q8_0: Best quality; requires ~16 GB of VRAM for a 4K context
- Q3_K_M: Good quality at roughly a 60% size reduction, suitable for systems with 8 GB+ VRAM
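To compare quants empirically on your own hardware, llama.cpp's `llama-perplexity` tool gives a quick quality signal. The test file below is an assumption; any representative plain-text corpus works:
```bash
# Measure perplexity of a quant on a local text file (lower is better).
./llama-perplexity -m phi4-deepseek-r1-distilled-v8-q4_k_m.gguf -f wiki.test.raw -c 4096
```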