Llamacpp imatrix Quantizations of Qwen2.5-14B-CIC-ACLARC

Using llama.cpp for imatrix quantization.

Original model: https://huggingface.co/sknow-lab/Qwen2.5-14B-CIC-ACLARC

Prompt format

<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
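The template above is standard ChatML. For illustration, it can be assembled programmatically; a minimal sketch (the function name and example strings below are illustrative, not part of this model card):

```python
# Build a ChatML prompt string matching the template above.
# The model generates its answer after the final assistant tag.
def build_prompt(system_prompt: str, prompt: str) -> str:
    return (
        f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

# Hypothetical usage for the citation-intent task:
print(build_prompt(
    "You are a citation-intent classifier.",
    "Classify the intent of the following citation: ...",
))
```

Most llama.cpp frontends apply this template automatically from the GGUF metadata; manual construction is only needed for raw completion endpoints.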

Citation

@misc{koloveas2025llmspredictcitationintent,
      title={Can LLMs Predict Citation Intent? An Experimental Analysis of In-context Learning and Fine-tuning on Open LLMs}, 
      author={Paris Koloveas and Serafeim Chatzopoulos and Thanasis Vergoulis and Christos Tryfonopoulos},
      year={2025},
      eprint={2502.14561},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.14561}, 
}
Model details

File format: GGUF
Model size: 14.8B params
Architecture: qwen2

Model tree for sknow-lab/Qwen2.5-14B-CIC-ACLARC-GGUF

Base model: Qwen/Qwen2.5-14B
