
This is a FilCo context-filtering model for knowledge-grounded dialog generation, particularly in the open domain. Specifically, it is meta-llama/Llama-2-7b fine-tuned with LoRA for 3 epochs on the Wizard of Wikipedia (WoW) training set.

It is intended for use with FilCo (https://github.com/zorazrw/filco), but can also be applied in similar scenarios.
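
As a rough usage sketch, the LoRA adapter can be loaded on top of the base Llama-2-7b with Hugging Face transformers and PEFT. The adapter repository id below is a placeholder for this model card's id, and the prompt shown is only illustrative; the actual prompt template for filtering retrieved context is defined in the FilCo repository.

```python
# Minimal loading sketch. NOTE: "zorazrw/filco-wow-llama2-7b-lora" is a
# placeholder adapter id -- substitute the actual repo id of this model card.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "meta-llama/Llama-2-7b-hf"       # gated base model; requires access approval
adapter_id = "zorazrw/filco-wow-llama2-7b-lora"  # placeholder id for this LoRA adapter

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(base_model_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

# Illustrative input only: the real format for combining the retrieved passage
# and the dialog query follows https://github.com/zorazrw/filco.
prompt = (
    "Query: who wrote the novel Dune?\n"
    "Passage: Dune is a 1965 science fiction novel by American author Frank Herbert.\n"
    "Filtered context:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```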

Citation

If you use this model in your research, please cite:

@article{wang2023learning,
  title={Learning to Filter Context for Retrieval-Augmented Generation},
  author={Wang, Zhiruo and Araki, Jun and Jiang, Zhengbao and Parvez, Md Rizwan and Neubig, Graham},
  journal={arXiv preprint arXiv:2311.08377},
  year={2023}
}