WikiDBGraph Dataset

This document provides an overview of the datasets associated with the research paper on WikiDBGraph, a large-scale graph of interconnected relational databases.

Description: WikiDBGraph is a novel, large-scale graph where each node represents a relational database, and edges signify identified correlations or similarities between these databases. It is constructed from 100,000 real-world-like databases derived from Wikidata. The graph is enriched with a comprehensive set of node (database) and edge (inter-database relationship) properties, categorized into structural, semantic, and statistical features.

Source: WikiDBGraph is derived from the WikiDBs corpus (see below). The inter-database relationships (edges) are primarily identified using a machine learning model trained to predict database similarity based on their schema embeddings, significantly expanding upon the explicitly known links in the source data.

Key Characteristics:

  • Nodes: 100,000 relational databases.
  • Edges: Millions of identified inter-database relationships (the exact number depends on the similarity threshold $\tau$ used for graph construction).
  • Node Properties: Include database-level structural details (e.g., number of tables, columns, foreign keys), semantic information (e.g., pre-computed embeddings, topic categories from Wikidata, community IDs), and statistical measures (e.g., database size, total rows, column cardinalities, column entropies).
  • Edge Properties: Include structural similarity (e.g., Jaccard index on table/column names, Graph Edit Distance based metrics), semantic relatedness (e.g., cosine similarity of embeddings, prediction confidence), and statistical relationships (e.g., KL divergence of shared column distributions).

Usage in this Research: WikiDBGraph is the primary contribution of this paper. It is used to:

  • Demonstrate a methodology for identifying and representing inter-database relationships at scale.
  • Provide a rich resource for studying the landscape of interconnected databases.
  • Serve as a foundation for experiments in collaborative learning, showcasing its utility in feature-overlap and instance-overlap scenarios.

Availability:

  • Dataset (WikiDBGraph): The WikiDBGraph dataset, including the graph structure (edge lists for various thresholds $\tau$) and node/edge property files, will be made publicly available. (Please specify the URL here, e.g., Zenodo, GitHub, or your project website).
  • Code: The code used for constructing WikiDBGraph from WikiDBs, generating embeddings, and running the experiments will be made publicly available. (Please specify the URL here, e.g., GitHub repository).

License:

  • WikiDBGraph Dataset: Creative Commons Attribution 4.0 International (CC BY 4.0).
  • Associated Code: Apache License 2.0.

How to Use WikiDBGraph: The dataset is provided as a collection of files, categorized as follows:

A. Graph Structure Files: These files define the connections (edges) between databases (nodes) for different similarity thresholds ($\tau$).

  • filtered_edges_threshold_0.93.csv, filtered_edges_threshold_0.94.csv, filtered_edges_threshold_0.96.csv:
    • Meaning: CSV files representing edge lists. Each row typically contains a pair of database IDs that are connected at the specified similarity threshold.
    • How to Load: Use standard CSV parsing libraries (e.g., pandas.read_csv() in Python). These edge lists can then be used to construct graph objects in libraries like NetworkX (nx.from_pandas_edgelist()) or igraph.
  • filtered_edges_0.94_with_confidence.csv:
    • Meaning: An edge list in CSV format for $\tau=0.94$, which also includes the confidence score (similarity score) for each edge.
    • How to Load: Similar to the other CSV edge lists, using pandas. The confidence score can be used as an edge weight (see the sketch at the end of this section).
  • graph_raw_0.93.dgl, graph_raw_0.94.dgl, graph_raw_0.96.dgl:
    • Meaning: These are graph objects serialized in the Deep Graph Library (DGL) format for different thresholds. They likely contain the basic graph structure (nodes and edges).
    • How to Load .dgl files: DGL provides functions to save and load graph objects. You would typically use:
      import dgl
      # dgl.load_graphs returns (list_of_graphs, labels_dict)
      graphs, _ = dgl.load_graphs('graph_raw_0.94.dgl')
      g = graphs[0]  # the file typically stores a single graph
      
  • graph_with_properties_0.94.dgl:
    • Meaning: A DGL graph object for $\tau=0.94$ that includes node and/or edge properties directly embedded within the graph structure. This is convenient for direct use in DGL-based graph neural network models.
    • How to Load: Same as other .dgl files using dgl.load_graphs(). Node features can be accessed via g.ndata and edge features via g.edata.
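
For illustration, the following minimal sketch loads the $\tau=0.94$ edge list with confidence scores into pandas and builds a NetworkX graph. The CSV schema is not documented in this card, so the sketch assumes the first two columns hold the endpoint database IDs and treats any remaining columns (such as the confidence score) as edge attributes:

    import pandas as pd
    import networkx as nx

    # Assumption: the first two columns are the endpoint database IDs;
    # remaining columns (e.g., the confidence score) become edge attributes.
    edges = pd.read_csv('filtered_edges_0.94_with_confidence.csv')
    src_col, dst_col = edges.columns[:2]
    G = nx.from_pandas_edgelist(edges, source=src_col, target=dst_col, edge_attr=True)
    print(G.number_of_nodes(), G.number_of_edges())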

B. Node Property Files: These CSV files provide various features and metadata for each database node.

  • database_embeddings.pt:
    • Meaning: A PyTorch (.pt) file containing the pre-computed embedding vectors for each database. These are crucial semantic features.
    • How to Load: Use torch.load('database_embeddings.pt') in Python with PyTorch installed.
  • node_structural_properties.csv:
    • Meaning: Contains structural characteristics of each database (e.g., number of tables, columns, foreign key counts).
  • column_cardinality.csv:
    • Meaning: Statistics on the number of unique values (cardinality) for columns within each database.
  • column_entropy.csv:
    • Meaning: Entropy values calculated for columns within each database, indicating data diversity.
  • data_volume.csv:
    • Meaning: Information regarding the size or volume of data in each database (e.g., total rows, file size).
  • cluster_assignments_dim2_sz100_msNone.csv:
    • Meaning: Cluster labels assigned to each database after dimensionality reduction (e.g., t-SNE) and clustering (e.g., HDBSCAN).
  • community_assignment.csv:
    • Meaning: Community labels assigned to each database based on graph community detection algorithms (e.g., Louvain).
  • tsne_embeddings_dim2.csv:
    • Meaning: 2-dimensional projection of database embeddings using t-SNE, typically used for visualization.
  • How to Load CSV Node Properties: Use pandas.read_csv() in Python. The resulting DataFrames can be merged or used to assign attributes to nodes in a graph object.
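
As a minimal sketch (assuming each node-level CSV carries a db_id column to join on, and that database_embeddings.pt loads with torch.load), the node property tables can be combined like this:

    import torch
    import pandas as pd

    # Pre-computed database embeddings (the stored object may be a tensor or a
    # dict keyed by database ID, depending on how it was saved).
    embeddings = torch.load('database_embeddings.pt')

    # Assumption: each node-property CSV has a db_id column usable as a join key.
    structural = pd.read_csv('node_structural_properties.csv')
    clusters = pd.read_csv('cluster_assignments_dim2_sz100_msNone.csv')
    node_props = structural.merge(clusters, on='db_id', how='left')
    print(node_props.head())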

C. Edge Property Files: These CSV files provide features for the relationships (edges) between databases.

  • edge_embed_sim.csv:
    • Meaning: Contains the embedding-based similarity scores (EmbedSim) for connected database pairs. These scores may overlap with the confidence values in filtered_edges_0.94_with_confidence.csv, or they may form a global list of all pairwise similarities above an initial cutoff.
  • edge_structural_properties_GED_0.94.csv:
    • Meaning: Contains structural similarity metrics (potentially Graph Edit Distance or related measures) for edges in the graph constructed with $\tau=0.94$.
  • How to Load CSV Edge Properties: Use pandas.read_csv(). These can be used to assign attributes to edges in a graph object.
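
As an illustrative sketch (assuming both files identify an edge by the same two endpoint ID columns, which should be checked against the released schemas), edge properties can be joined onto an edge list like this:

    import pandas as pd

    edges = pd.read_csv('filtered_edges_threshold_0.94.csv')
    ged = pd.read_csv('edge_structural_properties_GED_0.94.csv')

    # Assumption: both files share the two endpoint ID columns, so the merge
    # attaches the structural similarity metrics to each edge.
    key_cols = list(edges.columns[:2])
    edges_with_props = edges.merge(ged, on=key_cols, how='left')
    print(edges_with_props.head())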

D. Other Analysis Files:

  • distdiv_results.csv:
    • Meaning: Likely contains results from a distance or diversity analysis of the databases or their embeddings; the exact contents are detailed in the paper and the accompanying documentation.
    • How to Load: As a CSV file using pandas.

Detailed instructions on the specific schemas of these CSV files, precise content of .pt and .dgl files, and example usage scripts will be provided in the code repository and full dataset documentation upon release.
