---
dataset_info:
  features:
    - name: identifier
      dtype: string
    - name: dataset
      dtype: string
    - name: question
      dtype: string
    - name: rank
      dtype: int64
    - name: url
      dtype: string
    - name: read_more_link
      dtype: string
    - name: language
      dtype: string
    - name: title
      dtype: string
    - name: top_image
      dtype: string
    - name: meta_img
      dtype: string
    - name: images
      sequence: string
    - name: movies
      sequence: string
    - name: keywords
      sequence: 'null'
    - name: meta_keywords
      sequence: string
    - name: tags
      dtype: 'null'
    - name: authors
      sequence: string
    - name: publish_date
      dtype: string
    - name: summary
      dtype: string
    - name: meta_description
      dtype: string
    - name: meta_lang
      dtype: string
    - name: meta_favicon
      dtype: string
    - name: meta_site_name
      dtype: string
    - name: canonical_link
      dtype: string
    - name: text
      dtype: string
  splits:
    - name: train
      num_bytes: 28143581642
      num_examples: 2812737
  download_size: 11334496137
  dataset_size: 28143581642
configs:
  - config_name: default
    data_files:
      - split: train
        path:
          - data/part_*
language:
  - en
pretty_name: FactCheck
tags:
  - FactCheck
  - knowledge-graph
  - question-answering
  - classification
  - FactBench
  - YAGO
  - DBpedia
  - LLM-factuality
  - fact-checking
license: mit
task_categories:
  - question-answering
size_categories:
  - 1M<n<10M
---

# Dataset Card for FactCheck

πŸ“ Dataset Summary

FactCheck is a benchmark for evaluating LLMs on knowledge-graph fact verification. It combines structured facts from YAGO, DBpedia, and FactBench with web-extracted evidence, including questions, summaries, full page text, and metadata. The dataset contains roughly 2.8 million examples designed for sentence-level fact-checking and QA tasks.
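
A minimal loading sketch with the Hugging Face `datasets` library; the repo id below is a placeholder for this dataset's actual Hub path, and streaming avoids the ~11 GB download up front:

```python
from datasets import load_dataset

# Placeholder: replace with this dataset's full <owner>/FactCheck repo id.
REPO_ID = "<owner>/FactCheck"

# Streaming avoids downloading the full ~11 GB archive up front.
ds = load_dataset(REPO_ID, split="train", streaming=True)

example = next(iter(ds))
print(example["question"])
print(example["url"])
```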

## 📚 Supported Tasks

- Question Answering: answer fact-checking questions derived from KG triples.
- Benchmarking LLMs: assess LLM factuality with and without the accompanying web evidence (see the sketch below).
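
For the benchmarking task, a hypothetical prompt builder might pair each question with its web evidence. The prompt format, labels, and truncation limit below are illustrative assumptions, not part of the dataset:

```python
def build_prompt(example: dict, max_chars: int = 2000) -> str:
    """Hypothetical fact-verification prompt; the wording is not prescribed by the dataset."""
    evidence = (example.get("text") or "")[:max_chars]  # truncate for model context limits
    return (
        f"Question: {example['question']}\n"
        f"Evidence ({example['url']}):\n{evidence}\n\n"
        "Answer the question and state whether the fact is supported or refuted."
    )
```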

## 🗣 Languages

- English (en)
- Because the evidence pages come from Google search results, some documents may be in other languages; check the language and meta_lang fields.
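
One way to check the actual language mix is to tally `meta_lang` over a small streamed sample (the 1,000-row sample size is arbitrary):

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("<owner>/FactCheck", split="train", streaming=True)  # placeholder repo id
langs = Counter(ex["meta_lang"] for ex in ds.take(1000))
print(langs.most_common(10))
```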

## 🧱 Dataset Structure

Each example includes the following fields:

| Field | Type | Description |
|---|---|---|
| identifier | string | Unique ID per example |
| dataset | string | Source KG: YAGO, DBpedia, or FactBench |
| question | string | Question derived from the fact |
| rank | int64 | Relevance rank of the retrieved page |
| url, read_more_link | string | Web source links |
| title, summary, text | string | Content extracted from the page HTML |
| images, movies | list[string] | Media assets found on the page |
| keywords, meta_keywords, tags, authors, publish_date, meta_description, meta_lang, meta_favicon, meta_site_name, top_image, meta_img, canonical_link | string or list[string] | Additional page metadata |
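
Since `rank` orders the retrieved pages, multiple rows presumably share one question. A sketch that regroups a sample into per-question evidence lists (treating `question` as the grouping key is my assumption):

```python
from collections import defaultdict

from datasets import load_dataset

ds = load_dataset("<owner>/FactCheck", split="train", streaming=True)  # placeholder repo id

# Assumption: several evidence pages share one question, ordered by `rank`.
by_question = defaultdict(list)
for ex in ds.take(1000):  # small sample; the full split has ~2.8M rows
    by_question[ex["question"]].append((ex["rank"], ex["url"], ex["title"]))

for question, pages in list(by_question.items())[:3]:
    print(question)
    for rank, url, title in sorted(pages):
        print(f"  [{rank}] {title} - {url}")
```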

## 🚦 Data Splits

Only a train split is available, aggregated from 13 source files (data/part_*).

## 🛠 Dataset Creation

### Curation Rationale

Constructed to benchmark LLM performance on structured KG verification, with and without external evidence.

### Source Data

- FactBench: ~2,800 facts
- YAGO: ~1,400 facts
- DBpedia: ~9,300 facts
- Web evidence: pages scraped from Google search engine results (SERP) for contextual support.

### Processing Steps

- Facts were retrieved and paired with search queries.
- Web pages were scraped, parsed, cleaned, and stored (see the sketch below).
- Metadata was normalized across all sources.
- Optional ranking and filtering were applied to prioritize high-relevance evidence.
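
The per-page fields in the schema (`top_image`, `meta_img`, `movies`, `meta_favicon`, `canonical_link`, `keywords`, `summary`) mirror the attributes of the `newspaper3k` `Article` object. Whether that library was actually used is an assumption, but the parse-and-clean step may have looked roughly like this:

```python
from newspaper import Article  # pip install newspaper3k; an assumed tool, not confirmed by the card

def extract_evidence(url: str) -> dict:
    """Parse one SERP result into the dataset's per-page fields (sketch)."""
    article = Article(url)
    article.download()
    article.parse()
    article.nlp()  # fills `keywords` and `summary`
    return {
        "url": url,
        "title": article.title,
        "authors": article.authors,
        "publish_date": str(article.publish_date or ""),
        "top_image": article.top_image,
        "meta_img": article.meta_img,
        "images": sorted(article.images),
        "movies": article.movies,
        "keywords": article.keywords,
        "meta_keywords": article.meta_keywords,
        "summary": article.summary,
        "meta_description": article.meta_description,
        "meta_lang": article.meta_lang,
        "meta_favicon": article.meta_favicon,
        "canonical_link": article.canonical_link,
        "text": article.text,
    }
```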

### Provenance

Compiled by the FactCheck-AI team from public sources (knowledge graphs and web content).

## ⚠️ Personal & Sensitive Information

The FactCheck dataset does not contain personal or private data. All information is sourced from publicly accessible knowledge graphs (YAGO, DBpedia, FactBench) and web-extracted evidence. However, if you identify any content that you believe may be in conflict with privacy standards or requires further review, please contact us. We are committed to addressing such concerns promptly and making necessary adjustments.

## 🧑‍💻 Dataset Curators

FactCheck-AI Team

## ✉️ Contact

For issues or questions, please raise a GitHub issue on this repo.


## ✅ SQL Queries for Interactive Analysis

Here are some useful queries you can run in the Hugging Face SQL Console (which uses DuckDB syntax) to analyze this dataset:

```sql
-- 1. Count of rows per source KG
SELECT dataset, COUNT(*) AS count
FROM train
GROUP BY dataset
ORDER BY count DESC;

-- 2. Daily entry counts based on publish_date
SELECT publish_date, COUNT(*) AS count
FROM train
WHERE publish_date IS NOT NULL AND publish_date <> ''
GROUP BY publish_date
ORDER BY publish_date;

-- 3. Count of missing titles or summaries
SELECT
  SUM(CASE WHEN title IS NULL OR title = '' THEN 1 ELSE 0 END) AS missing_title,
  SUM(CASE WHEN summary IS NULL OR summary = '' THEN 1 ELSE 0 END) AS missing_summary
FROM train;

-- 4. Top 5 most frequent host domains
SELECT split_part(url, '/', 3) AS domain,
       COUNT(*) AS count
FROM train
GROUP BY domain
ORDER BY count DESC
LIMIT 5;

-- 5. Average number of meta keywords per example
-- (the keywords column is typed as a null sequence, so meta_keywords is used)
SELECT AVG(len(meta_keywords)) AS avg_keywords
FROM train;
```

These queries offer insights into data coverage, quality, and structure.
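
The same aggregations can be reproduced locally. A sketch using DuckDB over a small pandas sample (the repo id is a placeholder, and the 10,000-row slice just keeps the in-memory table small):

```python
import duckdb
from datasets import load_dataset

# The full split is ~28 GB; a 10k-row slice keeps the in-memory table small.
df = load_dataset("<owner>/FactCheck", split="train[:10000]").to_pandas()

# DuckDB resolves `df` from the local scope via its replacement scan.
print(duckdb.sql("SELECT dataset, COUNT(*) AS n FROM df GROUP BY dataset ORDER BY n DESC"))
```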