|
---
license: apache-2.0
datasets:
- FreedomIntelligence/RAG-Instruct
language:
- en
metrics:
- accuracy
base_model:
- meta-llama/Llama-3.1-8B
pipeline_tag: text-generation
---
|
## Introduction |
|
|
|
RAG-Instruct is a method for generating diverse, high-quality retrieval-augmented generation (RAG) instruction data. It synthesizes instruction datasets from any source corpus by leveraging two approaches:
|
|
|
- **Five RAG paradigms**, which represent diverse query-document relationships to enhance model generalization across tasks.
- **Instruction simulation**, which enriches instruction diversity and quality by drawing on the strengths of existing instruction datasets.
|
|
|
Using these approaches, we constructed [RAG-Instruct](https://huggingface.co/datasets/FreedomIntelligence/RAG-Instruct), covering a wide range of RAG scenarios and tasks.
|
|
|
RAG-Instruct-Llama3-8B is Llama-3.1-8B fine-tuned on the [RAG-Instruct](https://huggingface.co/datasets/FreedomIntelligence/RAG-Instruct) dataset. Training on this data substantially improves RAG performance across a wide range of benchmarks:
|
|
|
| Model | WQA (acc) | PQA (acc) | TQA (acc) | OBQA (EM) | Pub (EM) | ARC (EM) | 2WIKI (acc) | HotP (acc) | MSQ (acc) | CFQA (EM) | PubMed (EM) |
|--------------------------------|-----------|-----------|-----------|-----------|----------|----------|-------------|------------|-----------|-----------|-------------|
| Llama3.1-8B | 59.5 | 60.8 | 73.4 | 82.0 | 56.7 | 77.1 | 65.6 | 45.6 | 18.7 | 56.5 | 73.9 |
| Llama3.1-8B + **RAG-Instruct** | 69.7 | 68.4 | 79.3 | 84.8 | 77.2 | 79.9 | 79.3 | 56.4 | 30.3 | 57.8 | 77.0 |
|
|
|
|
|
## Usage
|
RAG-Instruct-Llama3-8B can be used just like `Llama-3.1-8B-Instruct`. You can deploy it with tools like [vLLM](https://github.com/vllm-project/vllm) or [SGLang](https://github.com/sgl-project/sglang) (a minimal vLLM sketch follows the examples below), or run inference directly:
|
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "FreedomIntelligence/RAG-Instruct-Llama3-8B",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("FreedomIntelligence/RAG-Instruct-Llama3-8B")

# Example input: retrieved paragraphs followed by the instruction
input_text = """### Paragraph:
[1] structure is at risk from new development...
[2] as Customs and Excise stores...
[3] Powis Street is partly underway...
...

### Instruction:
Which organization is currently using a building in Woolwich that holds historical importance?
"""

# Apply the chat template, then tokenize
messages = [{"role": "user", "content": input_text}]
inputs = tokenizer(
    tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True),
    return_tensors="pt",
).to(model.device)

# Generate and decode the answer
outputs = model.generate(**inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
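The `### Paragraph:` / `### Instruction:` layout above is a plain-text prompt, so it is easy to assemble from your own retriever's output. A minimal sketch (the `build_rag_prompt` helper and the placeholder passages are illustrative, not part of the released code):

```python
def build_rag_prompt(passages: list[str], question: str) -> str:
    """Hypothetical helper: format retrieved passages and a question
    into the numbered-paragraph prompt layout used in the example above."""
    paragraph_block = "\n".join(f"[{i}] {p}" for i, p in enumerate(passages, start=1))
    return f"### Paragraph:\n{paragraph_block}\n\n### Instruction:\n{question}\n"

# Example: plug in your retriever's top-k passages (placeholders here)
input_text = build_rag_prompt(
    ["first retrieved passage...", "second retrieved passage..."],
    "Which organization is currently using a building in Woolwich that holds historical importance?",
)
```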
|
|
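For higher-throughput serving, here is a minimal offline-inference sketch with vLLM. It assumes a recent vLLM release with the `LLM.chat` API; the sampling parameters are illustrative, not recommended settings:

```python
from vllm import LLM, SamplingParams

# Load the model into a vLLM engine (downloads from the Hub on first use)
llm = LLM(model="FreedomIntelligence/RAG-Instruct-Llama3-8B")
params = SamplingParams(temperature=0.7, max_tokens=2048)

# Prompt in the "### Paragraph:" / "### Instruction:" layout shown above
input_text = "### Paragraph:\n[1] ...\n\n### Instruction:\n..."

# LLM.chat applies the model's chat template before generation
messages = [{"role": "user", "content": input_text}]
outputs = llm.chat(messages, params)
print(outputs[0].outputs[0].text)
```

Recent vLLM versions can also expose an OpenAI-compatible endpoint with `vllm serve FreedomIntelligence/RAG-Instruct-Llama3-8B`.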
|
## Citation |
|
```
@misc{liu2024raginstructboostingllmsdiverse,
      title={RAG-Instruct: Boosting LLMs with Diverse Retrieval-Augmented Instructions},
      author={Wanlong Liu and Junying Chen and Ke Ji and Li Zhou and Wenyu Chen and Benyou Wang},
      year={2024},
      eprint={2501.00353},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2501.00353},
}
```