arXiv:2502.18934

Kanana: Compute-efficient Bilingual Language Models

Published on Feb 26
Submitted by bzantium on Feb 27
#2 Paper of the day

Abstract

We introduce Kanana, a series of bilingual language models that demonstrate exceptional performance in Korean and competitive performance in English. The computational cost of Kanana is significantly lower than that of state-of-the-art models of similar size. The report details the techniques employed during pre-training to achieve compute-efficient yet competitive models, including high-quality data filtering, staged pre-training, depth up-scaling, and pruning and distillation. Furthermore, the report outlines the methodologies utilized during post-training of the Kanana models, encompassing supervised fine-tuning and preference optimization, aimed at enhancing their capability for seamless interaction with users. Lastly, the report elaborates on practical approaches for adapting the models to specific scenarios, such as embedding, retrieval-augmented generation, and function calling. The Kanana model series spans from 2.1B to 32.5B parameters, with the 2.1B models (base, instruct, embedding) publicly released to promote research on Korean language models.
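Of the pre-training techniques listed in the abstract, depth up-scaling is the most straightforward to illustrate in code. Below is a minimal sketch, assuming a LLaMA-style decoder stack as exposed by transformers; the checkpoint name, the overlap width, and the layer split are illustrative assumptions, not the recipe from the report.

```python
import copy
import torch
from transformers import AutoModelForCausalLM

# Illustrative checkpoint; depth up-scaling applies to any LLaMA-style model.
model = AutoModelForCausalLM.from_pretrained(
    "kakaocorp/kanana-nano-2.1b-base",
    torch_dtype=torch.bfloat16,
)

layers = model.model.layers   # the decoder blocks
n = len(layers)
overlap = n // 4              # width of the duplicated middle band (assumed)

# Deeper stack: a bottom slice plus a deep-copied top slice; the slices
# overlap in the middle, so those layers appear twice in the new model.
new_layers = list(layers[: n // 2 + overlap]) + [
    copy.deepcopy(layer) for layer in layers[n // 2 - overlap :]
]
for i, layer in enumerate(new_layers):
    layer.self_attn.layer_idx = i   # keep KV-cache indexing consistent

model.model.layers = torch.nn.ModuleList(new_layers)
model.config.num_hidden_layers = len(new_layers)

# The up-scaled model is then further pre-trained so the duplicated
# layers can specialize; without that step quality typically drops.
```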

Community

Paper author and submitter

Kakao has just released the Kanana LLM technical report and the 2.1B model series (base, instruct, and embedding)!
Models: https://huggingface.co/collections/kakaocorp/kanana-nano-21b-67a326cda1c449c8d4172259
GitHub: https://github.com/kakao/kanana
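For anyone who wants to try the released checkpoints, here is a minimal quick-start sketch using transformers. The model identifier follows the naming in the linked collection (check the Hub page for the exact id), and the generation settings are illustrative defaults, not recommendations from the report.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kakaocorp/kanana-nano-2.1b-instruct"  # see the linked collection
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build a chat prompt with the model's own template and generate a reply.
messages = [
    {"role": "user", "content": "한국의 수도는 어디인가요?"}  # "What is the capital of Korea?"
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```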


Models citing this paper 3

Datasets citing this paper 0


Spaces citing this paper 1

Collections including this paper 2