FP8 Dynamic Quantized kakaocorp/kanana-1.5-8b-instruct-2505
1. What FP8-Dynamic Quantization Is
- FP8 format
  - 8-bit floating-point (E4M3: 1 sign bit + 4 exponent bits + 3 mantissa bits).
  - Drastically shrinks weight/activation size while keeping floating-point behavior.
- Dynamic scheme (FP8_DYNAMIC)
  - Weights: static, per-channel quantization (each output-feature channel has its own scale).
  - Activations: dynamic, per-token quantization (scales are recomputed on the fly for every input token).
- RTN (Round-To-Nearest) PTQ
  - Post-training; no back-propagation required.
  - No calibration dataset is needed because:
    - Weights use symmetric RTN.
    - Activations are quantized dynamically at inference time.
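To make the two halves of the scheme concrete, here is a minimal sketch of the math (not the llm-compressor implementation; the helper names are illustrative, and PyTorch >= 2.1 with torch.float8_e4m3fn support is assumed). Weights receive one static scale per output channel via symmetric RTN, while activations receive a fresh scale per token at inference time:

import torch

FP8_MAX = 448.0  # largest finite value representable in float8_e4m3fn

def quantize_weight_per_channel(w: torch.Tensor):
    """Static, symmetric RTN: one scale per output channel, computed once offline."""
    scale = (w.abs().amax(dim=1, keepdim=True) / FP8_MAX).clamp(min=1e-12)  # [out_features, 1]
    w_fp8 = (w / scale).clamp(-FP8_MAX, FP8_MAX).to(torch.float8_e4m3fn)
    return w_fp8, scale  # the scales are stored alongside the checkpoint

def quantize_activation_per_token(x: torch.Tensor):
    """Dynamic, symmetric quantization: one scale per token, recomputed at inference."""
    scale = (x.abs().amax(dim=-1, keepdim=True) / FP8_MAX).clamp(min=1e-12)  # [..., tokens, 1]
    x_fp8 = (x / scale).clamp(-FP8_MAX, FP8_MAX).to(torch.float8_e4m3fn)
    return x_fp8, scale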
2. Serving the FP8 Dynamic Model with vLLM
If your GPU does not support NCCL P2P (e.g., A40 or RTX 4090/5090), add the option below before serving. For reference, the dense model can also be served this way after FP8-Dynamic compression.
export NCCL_P2P_DISABLE=1
Example with 2 GPUs (tensor parallel), 90% GPU memory utilization, and a maximum model length of 32,768 tokens:
vllm serve BCCard/kanana-1.5-8b-instruct-2505-FP8-Dynamic \
--tensor-parallel-size 2 \
--gpu-memory-utilization 0.9 \
--max-model-len 32768 \
--enforce-eager \
--api-key bccard \
--served-model-name kanana-1.5-8b-instruct
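Once the server is up, it exposes an OpenAI-compatible API. A minimal client sketch, assuming vLLM's default port 8000 and the --api-key / --served-model-name values from the command above:

from openai import OpenAI

# Assumes the server runs locally on vLLM's default port 8000.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="bccard")

response = client.chat.completions.create(
    model="kanana-1.5-8b-instruct",
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    max_tokens=128,
)
print(response.choices[0].message.content)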
3. Quantization Code Walk-Through (Shared Knowledge)
LLM Compressor is an easy-to-use library for optimizing models for deployment with vLLM, including:
- A comprehensive set of quantization algorithms for weight-only and activation quantization
- Seamless integration with Hugging Face models and repositories
- A safetensors-based file format compatible with vLLM
- Large model support via accelerate
import torch
from transformers import AutoModelForCausalLM, AutoProcessor, AutoTokenizer

MODEL_ID = "kakaocorp/kanana-1.5-8b-instruct-2505"

# Load the original bf16 model along with its tokenizer and processor.
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).to("cuda")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
processor = AutoProcessor.from_pretrained(MODEL_ID)
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier
# Configure the quantization algorithm and scheme.
# In this case, we:
#   * quantize the weights to FP8 with static per-channel scales via PTQ (RTN)
#   * quantize the activations to FP8 with dynamic per-token scales
recipe = QuantizationModifier(
targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"]
)
# Apply quantization.
oneshot(model=model, recipe=recipe)
# Confirm generations of the quantized model look sane.
print("========== SAMPLE GENERATION ==============")
input_ids = tokenizer("Hello my name is", return_tensors="pt").input_ids.to("cuda")
output = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(output[0]))
print("==========================================")
# Save to disk in compressed-tensors format.
SAVE_DIR = "BCCard/kanana-1.5-8b-instruct-2505-FP8-Dynamic"
model.save_pretrained(SAVE_DIR)
tokenizer.save_pretrained(SAVE_DIR)
processor.save_pretrained(SAVE_DIR)
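After saving, the compressed-tensors checkpoint can be reloaded for a quick offline sanity check with vLLM's Python API. A sketch (run as a separate script; max_model_len is shortened here only to keep the check lightweight):

from vllm import LLM, SamplingParams

SAVE_DIR = "BCCard/kanana-1.5-8b-instruct-2505-FP8-Dynamic"  # directory written above

# Reload the FP8-Dynamic checkpoint and generate a short sample.
llm = LLM(model=SAVE_DIR, max_model_len=4096)
outputs = llm.generate(
    ["Hello my name is"],
    SamplingParams(temperature=0.0, max_tokens=32),
)
print(outputs[0].outputs[0].text)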
4. Kanana 1.5 model card
Kanana 1.5, a newly introduced version of the Kanana model family, presents substantial enhancements in coding, mathematics, and function-calling capabilities over the previous version, enabling broader application to more complex real-world problems. This new version can now natively handle context lengths of up to 32K tokens, and up to 128K tokens using YaRN, allowing the model to maintain coherence when handling extensive documents or engaging in extended conversations. Furthermore, Kanana 1.5 delivers more natural and accurate conversations through a refined post-training process.
Performance
Base Model Evaluation
Models | MMLU | KMMLU | HAERAE | HumanEval | MBPP | GSM8K |
---|---|---|---|---|---|---|
Kanana-1.5-8B | 64.24 | 48.94 | 82.77 | 61.59 | 57.80 | 63.53 |
Kanana-8B | 64.22 | 48.30 | 83.41 | 40.24 | 51.40 | 57.09 |
Instruct Model Evaluation
Models | MT-Bench | KoMT-Bench | IFEval | HumanEval+ | MBPP+ | GSM8K (0-shot) | MATH | MMLU (0-shot, CoT) | KMMLU (0-shot, CoT) | FunctionChatBench |
---|---|---|---|---|---|---|---|---|---|---|
Kanana-1.5-8B* | 7.76 | 7.63 | 80.11 | 76.83 | 67.99 | 87.64 | 67.54 | 68.82 | 48.28 | 58.00 |
Kanana-8B | 7.13 | 6.92 | 76.91 | 62.20 | 43.92 | 79.23 | 37.68 | 66.50 | 47.43 | 17.37 |
* Models released under Apache 2.0 are trained on the latest versions compared to other models.
Processing 32K+ Length
Currently, the config.json uploaded to Hugging Face is configured for token lengths of 32,768 or less. To process tokens beyond this length, YaRN must be applied. By updating config.json with the following parameters, you can apply YaRN to handle token sequences up to 128K in length:
"rope_scaling": {
"factor": 4.4,
"original_max_position_embeddings": 32768,
"type": "yarn",
"beta_fast": 64,
"beta_slow": 2
},
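Alternatively, the same YaRN parameters can be supplied at load time instead of hand-editing config.json. A hedged sketch with transformers (the rope_scaling values are copied verbatim from the snippet above; behavior depends on the model's RoPE implementation honoring this config field):

import torch
from transformers import AutoConfig, AutoModelForCausalLM

MODEL_ID = "kakaocorp/kanana-1.5-8b-instruct-2505"

# Override rope_scaling in memory rather than editing config.json on disk.
config = AutoConfig.from_pretrained(MODEL_ID, trust_remote_code=True)
config.rope_scaling = {
    "factor": 4.4,
    "original_max_position_embeddings": 32768,
    "type": "yarn",
    "beta_fast": 64,
    "beta_slow": 2,
}
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    config=config,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)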
Contributors
- Language Model Training: Yunju Bak, Doohae Jung, Boseop Kim, Nayeon Kim, Hojin Lee, Jaesun Park, Minho Ryu
- Language Model Alignment: Jiyeon Ham, Seungjae Jung, Hyunho Kim, Hyunwoong Ko, Changmin Lee, Daniel Wontae Nam
- AI Engineering: Youmin Kim, Hyeongju Kim
- Quantization: Taeyoung Lee
Citation
@misc{kananallmteam2025kananacomputeefficientbilinguallanguage,
title={Kanana: Compute-efficient Bilingual Language Models},
author={Kanana LLM Team and Yunju Bak and Hojin Lee and Minho Ryu and Jiyeon Ham and Seungjae Jung and Daniel Wontae Nam and Taegyeong Eo and Donghun Lee and Doohae Jung and Boseop Kim and Nayeon Kim and Jaesun Park and Hyunho Kim and Hyunwoong Ko and Changmin Lee and Kyoung-Woon On and Seulye Baeg and Junrae Cho and Sunghee Jung and Jieun Kang and EungGyun Kim and Eunhwa Kim and Byeongil Ko and Daniel Lee and Minchul Lee and Miok Lee and Shinbok Lee and Gaeun Seo},
year={2025},
eprint={2502.18934},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2502.18934},
}
Contact
- Kanana LLM Team Technical Support: [email protected]
- Business & Partnership Contact: [email protected]
- BCCard Quantized LLMs Engineering Support: [email protected]
Model tree for BCCard/kanana-1.5-8b-instruct-2505-FP8-Dynamic
- Base model: kakaocorp/kanana-1.5-8b-instruct-2505