
# DRC RAG LLM v1

📄 Technical Report

DRC-RAG-LLM is an LLM based on Qwen1.5 and optimized for Retrieval-Augmented Generation (RAG) scenarios. We developed two RAG-scenario tasks, Retrieve Chunk Citation and Chinese Textual Table Understanding, then fine-tuned DRC-RAG-LLM on these tasks while preserving most of the base model's general language capabilities. On both tasks, DRC-RAG-LLM also performs strongly compared to ChatGPT and GPT-4o.

## Retrieve Chunk Citation

| Model | Precision | Recall | F1 |
|---|---|---|---|
| **Fine-tuned** | | | |
| DRC-RAG-LLM-7B | 73.61 | 90.24 | 81.08 |
| DRC-RAG-LLM-14B | 79.55 | 91.71 | 85.20 |
| **Non-Fine-tuned** | | | |
| Breeze-7B-Instruct-v1_0 | 28.53 | 14.47 | 19.20 |
| Llama3-TAIDE-LX-8B-Chat | 39.74 | 32.03 | 30.39 |
| Llama-3-Taiwan-8B-Instruct | 29.89 | 10.08 | 16.08 |
| Qwen1.5-7B-Chat | 22.12 | 7.48 | 11.18 |
| Qwen1.5-14B-Chat | 36.82 | 15.45 | 21.76 |
| **OpenAI Models** | | | |
| ChatGPT | 60.00 | 52.68 | 56.10 |
| GPT-4o | 62.02 | 66.33 | 64.10 |
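The precision, recall, and F1 above can be read as set-overlap metrics between the chunk IDs a model cites and the gold supporting chunks. The exact evaluation protocol is detailed in the technical report; the following is only a minimal sketch of how such scores are typically computed, assuming citations are compared as sets of chunk IDs:

```python
def citation_prf(predicted_ids, gold_ids):
    """Precision/recall/F1 between predicted and gold sets of cited chunk IDs.

    Illustrative scoring sketch, not the report's official evaluation code.
    """
    pred, gold = set(predicted_ids), set(gold_ids)
    true_positives = len(pred & gold)
    precision = true_positives / len(pred) if pred else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1
```

For example, citing chunks `{c1, c2}` when the gold set is `{c2, c3}` yields precision, recall, and F1 of 0.5 each.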

## Chinese Textual Table Understanding

| Model | Table QA | Table Summarization |
|---|---|---|
| **Fine-tuned** | | |
| DRC-RAG-LLM-7B | 66.6 | 56.6 |
| DRC-RAG-LLM-14B | 75.6 | 61.0 |
| **Non-Fine-tuned** | | |
| Breeze-7B-Instruct-v1_0 | 59.4 | 55.6 |
| Llama3-TAIDE-LX-8B-Chat | 55.6 | 48.2 |
| Llama-3-Taiwan-8B-Instruct | 63.8 | 38.6 |
| Qwen1.5-7B-Chat | 54.2 | 41.4 |
| Qwen1.5-14B-Chat | 62.4 | 55.4 |
| **OpenAI Models** | | |
| ChatGPT | 70.0 | 48.2 |
| GPT-4o | 82.6 | 85.9 |
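The table does not specify how Table QA answers are scored; the technical report is the authoritative source. As a purely illustrative sketch, a common choice for short-answer Table QA is normalized exact-match accuracy, shown below (the normalization step is an assumption, not the report's method):

```python
def table_qa_accuracy(predictions, references):
    """Exact-match accuracy (%) after stripping all whitespace.

    Hypothetical scoring sketch for short Table QA answers; the actual
    evaluation used in the report may differ (e.g. for summarization,
    which is typically judged by other means).
    """
    assert len(predictions) == len(references)

    def normalize(text):
        # Remove all whitespace so "台北 " and "台北" compare equal.
        return "".join(text.split())

    correct = sum(
        normalize(pred) == normalize(ref)
        for pred, ref in zip(predictions, references)
    )
    return 100.0 * correct / len(references)
```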