Finetune-RAG Model Checkpoints

This repository (`pints-ai/Llama-3.1-8B-Instruct-RAG_XML_tuned-2`) contains model checkpoints from the Finetune-RAG project, which fine-tunes retrieval-augmented LLMs to resist hallucination. The checkpoints were saved at training steps 12, 14, 16, 18, and 20 during XML-format fine-tuning of Llama-3.1-8B-Instruct on Finetune-RAG.
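A checkpoint from this repository can be loaded with the standard `transformers` API. This is a minimal sketch: it assumes the checkpoints are stored in the usual Hugging Face format and that the repository ID above resolves directly to loadable weights (if the individual step checkpoints live under revisions or subfolders, the `revision`/`subfolder` arguments of `from_pretrained` would be needed instead).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository ID for these fine-tuned checkpoints (from this model card).
MODEL_ID = "pints-ai/Llama-3.1-8B-Instruct-RAG_XML_tuned-2"


def load_checkpoint(model_id: str = MODEL_ID):
    """Load tokenizer and model weights for a Finetune-RAG checkpoint.

    Downloads the weights from the Hugging Face Hub on first call;
    an 8B model requires substantial RAM/VRAM.
    """
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    return tokenizer, model
```

Usage would then be `tokenizer, model = load_checkpoint()`, followed by the usual `model.generate(...)` call on a tokenized, retrieval-augmented prompt.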

Paper & Citation

```bibtex
@misc{lee2025finetuneragfinetuninglanguagemodels,
      title={Finetune-RAG: Fine-Tuning Language Models to Resist Hallucination in Retrieval-Augmented Generation},
      author={Zhan Peng Lee and Andre Lin and Calvin Tan},
      year={2025},
      eprint={2505.10792},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.10792},
}
```