0xFarzad committed · verified
Commit 416feaa · Parent: 73df739

Update README.md

Files changed (1): README.md (+135 −9)

README.md CHANGED
@@ -50,16 +50,142 @@ dataset_info:
  - name: text
    dtype: string
  splits:
- - name: train
-   num_bytes: 28143581642
-   num_examples: 2812737
-
+ - name: train
+   num_bytes: 28143581642
+   num_examples: 2812737
  download_size: 11334496137
  dataset_size: 28143581642
  configs:
- - config_name: default
-   data_files:
-   - split: train
-     path:
-     - "data/part_*"
+ - config_name: default
+   data_files:
+   - split: train
+     path:
+     - data/part_*
+ language:
+ - en
+ pretty_name: FactCheck
+ tags:
+ - FactCheck
+ - knowledge-graph
+ - question-answering
+ - classification
+ - FactBench
+ - YAGO
+ - DBpedia
+ license: mit
+ task_categories:
+ - question-answering
+ size_categories:
+ - 1M<n<10M
+ ---
+ # Dataset Card for FactCheck
+
+ ## 📝 Dataset Summary
+ **FactCheck** is a benchmark for evaluating LLMs on **knowledge graph fact verification**. It combines structured facts from YAGO, DBpedia, and FactBench with web-extracted evidence, including questions, summaries, full text, and metadata. The examples are designed for sentence-level fact-checking and QA tasks.
+
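+ A minimal loading sketch with the 🤗 `datasets` library (the repository ID below is a placeholder; substitute this dataset's actual Hub ID):
+
+ ```python
+ from datasets import load_dataset
+
+ # Stream the train split so the ~28 GB corpus is not downloaded up front.
+ ds = load_dataset("<org>/FactCheck", split="train", streaming=True)  # hypothetical repo ID
+
+ print(next(iter(ds)))  # inspect the first example
+ ```
+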
+ ## 📚 Supported Tasks
+ - **Question Answering**: Answer fact-checking questions derived from KG triples.
+ - **Benchmarking LLMs**: Evaluate model performance on fact verification, with and without external evidence.
+
+ ## 🗣 Languages
+ - English (`en`)
+ - Evidence retrieved via Google search may also include content in other languages.
+
+ ## 🧱 Dataset Structure
+ Each example includes metadata fields such as:
+
+ | Field | Type | Description |
+ |------------------|----------|-------------|
+ | `identifier` | string | Unique ID per example |
+ | `dataset` | string | Source KG: YAGO, DBpedia, or FactBench |
+ | `question` | string | Question derived from the fact |
+ | `rank` | int | Relevance rank of the question/page |
+ | `url`, `read_more_link` | string | Web source links |
+ | `title`, `summary`, `text` | string | Extracted HTML content |
+ | `images`, `movies` | [string] | Media assets |
+ | `keywords`, `meta_keywords`, `tags`, `authors`, `publish_date`, `meta_description`, `meta_site_name`, `top_image`, `meta_img`, `canonical_link` | string or [string] | Additional metadata |
+
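+ As a quick illustration of these fields, the sketch below (assuming `ds` was loaded as shown above) tallies a sample of examples by their source KG:
+
+ ```python
+ from collections import Counter
+ from itertools import islice
+
+ # Count the `dataset` field over the first 1,000 examples;
+ # iterate the full split for exact totals.
+ counts = Counter(ex["dataset"] for ex in islice(ds, 1000))
+ print(counts)
+ ```
+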
+ ## 🚦 Data Splits
+ Only a **train** split is available, aggregated across 13 source files.
+
+ ## 🛠 Dataset Creation
+
+ ### Curation Rationale
+ Constructed to benchmark LLM performance on structured KG verification, with and without external evidence.
+
+ ### Source Data
+ - **FactBench**: ~2,800 facts
+ - **YAGO**: ~1,400 facts
+ - **DBpedia**: ~9,300 facts
+ - Web-scraped evidence using Google SERP for contextual support.
+
+ ### Processing Steps
+ - Facts were retrieved and paired with search queries.
+ - Web pages were scraped, parsed, cleaned, and stored (see the sketch after this list).
+ - Metadata was normalized across all sources.
+ - Optional ranking and filtering were applied to prioritize high-relevance evidence.
+
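+ The extracted fields mirror the `Article` attributes of the `newspaper3k` library, so the scrape-and-parse step plausibly resembled the sketch below. This is an assumption based on the field names, not the authors' confirmed pipeline:
+
+ ```python
+ from newspaper import Article  # newspaper3k
+
+ def extract_evidence(url: str) -> dict:
+     """Download one evidence page and pull out the metadata stored in this dataset."""
+     article = Article(url)
+     article.download()
+     article.parse()
+     article.nlp()  # populates `keywords` and `summary`
+     return {
+         "url": url,
+         "title": article.title,
+         "summary": article.summary,
+         "text": article.text,
+         "authors": article.authors,
+         "publish_date": str(article.publish_date),
+         "keywords": article.keywords,
+         "top_image": article.top_image,
+         "canonical_link": article.canonical_link,
+     }
+ ```
+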
+ ### Provenance
+ Compiled by the FactCheck‑AI team, anchored in public sources (KGs + web content).
+
+ ## ⚠️ Personal & Sensitive Information
+ The FactCheck dataset does not contain personal or private data. All information is sourced from publicly accessible knowledge graphs (YAGO, DBpedia, FactBench) and web-extracted evidence. If you identify content that may conflict with privacy standards or requires further review, please contact us; we are committed to addressing such concerns promptly and making the necessary adjustments.
+
+ ## 🧑‍💻 Dataset Curators
+ FactCheck‑AI Team:
+
+ - **Farzad Shami** - University of Padua - [[email protected]](mailto:[email protected])
+ - **Stefano Marchesin** - University of Padua - [[email protected]](mailto:[email protected])
+ - **Gianmaria Silvello** - University of Padua - [[email protected]](mailto:[email protected])
+
+ ## ✉️ Contact
+ For issues or questions, please raise a GitHub issue on this repo.
+
  ---
+
+ ### ✅ SQL Queries for Interactive Analysis
+
+ Here are useful queries users can run in the Hugging Face SQL Console to analyze this dataset:
+
+ ```sql
+ -- 1. Count of rows per source KG
+ SELECT dataset, COUNT(*) AS count
+ FROM train
+ GROUP BY dataset
+ ORDER BY count DESC;
+ ```
+
+ ```sql
+ -- 2. Daily entry counts based on publish_date
+ SELECT publish_date, COUNT(*) AS count
+ FROM train
+ GROUP BY publish_date
+ ORDER BY publish_date;
+ ```
+
+ ```sql
+ -- 3. Count of missing titles or summaries
+ SELECT
+   SUM(CASE WHEN title IS NULL OR title = '' THEN 1 ELSE 0 END) AS missing_title,
+   SUM(CASE WHEN summary IS NULL OR summary = '' THEN 1 ELSE 0 END) AS missing_summary
+ FROM train;
+ ```
+
+ ```sql
+ -- 4. Top 5 most frequent host domains
+ SELECT
+   SUBSTR(url, INSTR(url, '://') + 3,
+          INSTR(SUBSTR(url, INSTR(url, '://') + 3), '/') - 1) AS domain,
+   COUNT(*) AS count
+ FROM train
+ GROUP BY domain
+ ORDER BY count DESC
+ LIMIT 5;
+ ```
+
+ ```sql
+ -- 5. Average number of keywords per example
+ SELECT AVG(array_length(keywords, 1)) AS avg_keywords
+ FROM train;
+ ```
+
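+ One more query in the same vein, a sketch that assumes `rank` is populated as described in the structure table (lower = more relevant), measures how much evidence comes from top-ranked pages:
+
+ ```sql
+ -- 6. Share of top-3-ranked evidence pages, per source KG
+ SELECT dataset,
+        AVG(CASE WHEN "rank" <= 3 THEN 1.0 ELSE 0.0 END) AS top3_share
+ FROM train
+ GROUP BY dataset
+ ORDER BY top3_share DESC;
+ ```
+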
+ These queries offer insights into data coverage, quality, and structure.