GGUF quantized version of KULLM3 by the Korea University NLP AI Lab.

I only did the conversion to GGUF; the model itself was built by the NLP AI Lab, so all credit goes to them.

The model seems to work really well in GGUF format and looks like another step towards a fully usable Korean LLM.

Amazing work!

The original repo: https://huggingface.co/nlpai-lab/KULLM3
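If you want to try the GGUF files locally, a minimal sketch with llama-cpp-python is shown below. The file name, context size, system prompt, and generation settings are assumptions for illustration only; substitute the 4-bit or 8-bit file you actually downloaded from this repo, and check the original KULLM3 repo for the recommended prompt template.

```python
# Minimal sketch for running a quantized KULLM3 GGUF file with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="KULLM3-q4_0.gguf",  # assumed file name; replace with your local GGUF path
    n_ctx=4096,                     # context window; adjust to your memory budget
    n_gpu_layers=-1,                # offload all layers to GPU if available, 0 for CPU-only
)

# Simple chat-style call; the system prompt here is just an example,
# not the official KULLM3 prompt template.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are KULLM3, a helpful Korean assistant."},
        # "Hello, please introduce yourself." in Korean
        {"role": "user", "content": "안녕하세요, 자기소개 해주세요."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```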

Model details:
- Format: GGUF
- Model size: 10.7B parameters
- Architecture: llama
- Available quantizations: 4-bit and 8-bit
