Phi-4 converted for ExLlamaV2

ExLlamaV2 is an inference library for running local LLMs on modern consumer GPUs.

| Quant | Type | File Size | VRAM* |
|---|---|---|---|
| phi-4 hb8 3bpw | 3 bits per weight | 6.66 GB | 10.3 GB |
| phi-4 hb8 4bpw | 4 bits per weight | 8.36 GB | 11.9 GB |
| phi-4 hb8 5bpw | 5 bits per weight | 10.1 GB | 13.5 GB |
| phi-4 hb8 6bpw | 6 bits per weight | 11.8 GB | 15.1 GB |
| phi-4 hb8 7bpw | 7 bits per weight | 13.5 GB | 16.7 GB |
| phi-4 hb8 8bpw | 8 bits per weight | 15.2 GB | 18.2 GB |

*Approximate values at 16k context with FP16 cache.
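
To fetch one of the quants, a minimal sketch using huggingface_hub is shown below. It assumes each quant sits on its own branch of the repository (a common convention for exl2 uploads); the branch name "4bpw" is hypothetical, so check the repository's branch list for the actual names.

```python
# Minimal sketch, assuming each quant lives on its own branch of the repo
# (a common convention for exl2 uploads). The revision "4bpw" below is
# hypothetical -- check the repository's branch list for the real names.
from huggingface_hub import snapshot_download

model_dir = snapshot_download(
    repo_id="cmh/phi-4_exl2",
    revision="4bpw",                      # hypothetical branch name
    local_dir="models/phi-4_exl2_4bpw",   # where to place the files
)
print("Model downloaded to", model_dir)
```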


Phi-4 Model Card

Phi-4 Technical Report

Model Summary

**Developers:** Microsoft Research

**Description:** phi-4 is a state-of-the-art open model built upon a blend of synthetic datasets, data from filtered public domain websites, and acquired academic books and Q&A datasets. The goal of this approach was to ensure that small capable models were trained with data focused on high quality and advanced reasoning.

phi-4 underwent a rigorous enhancement and alignment process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures.

**Architecture:** 14B parameters, dense decoder-only Transformer model

**Context length:** 16,384 tokens

Usage

Input Formats

Given the nature of the training data, phi-4 is best suited for prompts using the chat format as follows:

<|im_start|>system<|im_sep|>
You are a medieval knight and must provide explanations to modern people.<|im_end|>
<|im_start|>user<|im_sep|>
How should I explain the Internet?<|im_end|>
<|im_start|>assistant<|im_sep|>
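
Below is a minimal ExLlamaV2 generation sketch using this prompt format. It follows the library's dynamic-generator example; the model path is a placeholder for a downloaded quant, and max_new_tokens is arbitrary.

```python
# Minimal sketch following ExLlamaV2's dynamic generator example.
# "models/phi-4_exl2_4bpw" is a placeholder path to a downloaded quant.
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

config = ExLlamaV2Config("models/phi-4_exl2_4bpw")
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, max_seq_len=16384, lazy=True)  # 16k context, FP16 cache
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)

# Build the phi-4 chat-format prompt shown above.
prompt = (
    "<|im_start|>system<|im_sep|>\n"
    "You are a medieval knight and must provide explanations to modern people.<|im_end|>\n"
    "<|im_start|>user<|im_sep|>\n"
    "How should I explain the Internet?<|im_end|>\n"
    "<|im_start|>assistant<|im_sep|>\n"
)

output = generator.generate(prompt=prompt, max_new_tokens=256)
print(output)
```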

With ExUI:

To add the Phi-4 prompt format, edit exui/backend/prompts.py or replace it with https://huggingface.co/cmh/phi-4_exl2/raw/main/backend/prompts.py (one way to script this is sketched below).
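
The sketch below fetches backend/prompts.py from this repository with huggingface_hub and copies it over the local ExUI copy; the exui/ path assumes ExUI is checked out in the current working directory.

```python
# Hedged sketch: fetch the replacement prompts.py from this repo and copy it
# over ExUI's copy. Assumes ExUI is checked out at ./exui -- adjust the path.
import shutil
from huggingface_hub import hf_hub_download

src = hf_hub_download(repo_id="cmh/phi-4_exl2", filename="backend/prompts.py")
shutil.copy(src, "exui/backend/prompts.py")
print("Replaced exui/backend/prompts.py with", src)
```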

Base model: microsoft/phi-4