Jinyang23 committed
Commit 37d603f · verified · 1 Parent(s): 8351188

Update README.md

Files changed (1): README.md (+81, -3)
README.md CHANGED
@@ -1,3 +1,81 @@
- ---
- license: mit
- ---
+ ---
+ license: mit
+ task_categories:
+ - question-answering
+ - text-generation
+ language:
+ - en
+ size_categories:
+ - 1K<n<10K
+ tags:
+ - rag
+ - noise
+ - benchmark
+ - retrieval-augmented-generation
+ - llm-evaluation
+ ---
+
+ # Dataset Card for NoiserBench
+
+ This dataset card describes NoiserBench, an evaluation framework for analyzing the role of noise in Retrieval-Augmented Generation (RAG) systems built on Large Language Models (LLMs).
+
+ ## Dataset Details
+
+ ### Dataset Description
+
+ NoiserBench is a comprehensive benchmark for evaluating how different types of noise affect LLMs in RAG scenarios. It spans multiple datasets and reasoning tasks and is designed to analyze seven distinct noise types from a linguistic perspective. Evaluations with this framework reveal that noise falls into two practical groups: beneficial noise, which may enhance model capabilities, and harmful noise, which generally impairs performance.
+
+ - **Language(s) (NLP):** English
+ - **License:** MIT
+ - **Paper:** [Pandora's Box or Aladdin's Lamp: A Comprehensive Analysis Revealing the Role of RAG Noise in Large Language Models](https://arxiv.org/abs/2408.13533)
+
+ ### Dataset Sources
+
+ - **Repository:** https://github.com/jinyangwu/NoiserBench
+ - **Paper:** https://arxiv.org/abs/2408.13533
+
+ ## Uses
+
+ NoiserBench supports the following uses (a loading sketch follows the list):
+ - Evaluating the robustness of RAG systems under different noise conditions
+ - Analyzing how various noise types affect LLM performance in retrieval scenarios
+ - Benchmarking LLMs of different architectures and scales on noisy retrieval tasks
+ - Researching more robust and adaptable RAG solutions
+ - Understanding the distinction between beneficial and harmful noise in RAG contexts
+
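+ Below is a minimal loading sketch using the `datasets` library. Note the assumptions: the Hub repo ID `Jinyang23/NoiserBench` and the reliance on the default configuration are guesses, so adjust them to match the files actually hosted in this repository.
+
+ ```python
+ # Minimal loading sketch; the repo ID and the default configuration are
+ # assumptions, not a documented interface of this dataset.
+ from datasets import load_dataset
+
+ # Load every available split of the benchmark from the Hugging Face Hub.
+ ds = load_dataset("Jinyang23/NoiserBench")
+
+ # Show each split's name and size, then inspect one raw example.
+ for split_name, split in ds.items():
+     print(split_name, len(split))
+ print(ds[next(iter(ds))][0])
+ ```
+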
+ ## Dataset Structure
+
+ The benchmark comprises multiple datasets and reasoning tasks designed to evaluate seven distinct noise types, defined from a linguistic perspective. The framework groups these noise types into two categories:
+
+ 1. **Beneficial Noise**: noise that may enhance model capabilities and overall performance
+ 2. **Harmful Noise**: noise that generally impairs LLM performance
+
+ The evaluation framework includes various reasoning tasks to assess how different LLM architectures respond to these noise categories; an illustrative evaluation loop is sketched below.
+
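+ The sketch below shows the kind of noise-robustness evaluation loop this structure supports. It is not the official NoiserBench harness: the field names (`question`, `contexts`, `answer`), the prompt template, and the lenient exact-match metric are placeholders chosen for illustration.
+
+ ```python
+ # Illustrative noise-robustness evaluation loop (not the official NoiserBench
+ # harness; field names and the metric are placeholder assumptions).
+ def build_prompt(question: str, contexts: list[str]) -> str:
+     """Assemble a simple RAG prompt from retrieved (possibly noisy) passages."""
+     passages = "\n".join(f"[{i + 1}] {c}" for i, c in enumerate(contexts))
+     return (
+         "Answer the question using the passages below.\n\n"
+         f"{passages}\n\nQuestion: {question}\nAnswer:"
+     )
+
+ def lenient_exact_match(prediction: str, answer: str) -> bool:
+     """Count a prediction correct if it contains the gold answer (case-insensitive)."""
+     return answer.strip().lower() in prediction.strip().lower()
+
+ def evaluate(examples, generate) -> float:
+     """Return accuracy of a `generate(prompt) -> str` callable over RAG examples."""
+     correct = sum(
+         lenient_exact_match(generate(build_prompt(ex["question"], ex["contexts"])), ex["answer"])
+         for ex in examples
+     )
+     return correct / max(len(examples), 1)
+ ```
+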
+ ## Citation
+
+ **BibTeX:**
+
+ ```bibtex
+ @article{wu2024pandora,
+   title={Pandora's Box or Aladdin's Lamp: A Comprehensive Analysis Revealing the Role of RAG Noise in Large Language Models},
+   author={Wu, Jinyang and Che, Feihu and Zhang, Chuyuan and Tao, Jianhua and Zhang, Shuai and Shao, Pengpeng},
+   journal={arXiv preprint arXiv:2408.13533},
+   year={2024}
+ }
+ ```
+
+ **APA:**
+
+ Wu, J., Che, F., Zhang, C., Tao, J., Zhang, S., & Shao, P. (2024). Pandora's Box or Aladdin's Lamp: A Comprehensive Analysis Revealing the Role of RAG Noise in Large Language Models. arXiv preprint arXiv:2408.13533.
+
+ ## Glossary
+
+ - **RAG (Retrieval-Augmented Generation)**: A method that combines information retrieval with text generation to reduce hallucinations in large language models
+ - **Beneficial Noise**: Types of noise that may enhance certain aspects of model capabilities and overall performance
+ - **Harmful Noise**: Types of noise that generally impair LLM performance in RAG scenarios
+ - **NoiserBench**: The evaluation framework established in this work
+
+ ## Dataset Card Contact
+
+ For questions about this dataset card or the underlying benchmark, please refer to the code repository or contact me at [email protected].