tags:
- finance
size_categories:
- 10B<n<100B
---
# WikiDBGraph Dataset

This document provides an overview of the datasets associated with the research paper on WikiDBGraph, a large-scale graph of interconnected relational databases.

**Description:**

WikiDBGraph is a novel, large-scale graph in which each node represents a relational database and each edge signifies an identified correlation or similarity between two databases. It is constructed from 100,000 real-world-like databases derived from Wikidata. The graph is enriched with a comprehensive set of node (database) and edge (inter-database relationship) properties, categorized into structural, semantic, and statistical features.
**Source:**

WikiDBGraph is derived from the WikiDBs corpus. The inter-database relationships (edges) are primarily identified using a machine learning model trained to predict database similarity from schema embeddings, significantly expanding upon the explicitly known links in the source data.

**Key Characteristics:**

* **Nodes:** 100,000 relational databases.
* **Edges:** Millions of identified inter-database relationships (the exact number depends on the similarity threshold $\tau$ used for graph construction).
* **Node Properties:** Database-level structural details (e.g., number of tables, columns, foreign keys), semantic information (e.g., pre-computed embeddings, topic categories from Wikidata, community IDs), and statistical measures (e.g., database size, total rows, column cardinalities, column entropies).
* **Edge Properties:** Structural similarity (e.g., Jaccard index on table/column names, Graph Edit Distance-based metrics), semantic relatedness (e.g., cosine similarity of embeddings, prediction confidence), and statistical relationships (e.g., KL divergence of shared column distributions).
**Usage in this Research:**

WikiDBGraph is the primary contribution of this paper. It is used to:

* Demonstrate a methodology for identifying and representing inter-database relationships at scale.
* Provide a rich resource for studying the landscape of interconnected databases.
* Serve as a foundation for experiments in collaborative learning, showcasing its utility in feature-overlap and instance-overlap scenarios.
**Availability:**

* **Dataset (WikiDBGraph):** The WikiDBGraph dataset, including the graph structure (edge lists for various thresholds $\tau$) and the node/edge property files, will be made publicly available. (Please specify the URL here, e.g., Zenodo, GitHub, or your project website.)
* **Code:** The code used for constructing WikiDBGraph from WikiDBs, generating embeddings, and running the experiments will be made publicly available. (Please specify the URL here, e.g., a GitHub repository.)

**License:**

* **WikiDBGraph Dataset:** Creative Commons Attribution 4.0 International (CC BY 4.0).
* **Associated Code:** Apache License 2.0.
**How to Use WikiDBGraph:**

The dataset is provided as a collection of files, categorized as follows:

**A. Graph Structure Files:**

These files define the connections (edges) between databases (nodes) for different similarity thresholds ($\tau$).
* **`filtered_edges_threshold_0.93.csv`**, **`filtered_edges_threshold_0.94.csv`**, **`filtered_edges_threshold_0.96.csv`**:
    * **Meaning:** CSV edge lists. Each row typically contains a pair of database IDs that are connected at the specified similarity threshold.
    * **How to Load:** Use standard CSV parsing libraries (e.g., `pandas.read_csv()` in Python). The edge lists can then be used to construct graph objects in libraries such as NetworkX (`nx.from_pandas_edgelist()`) or igraph.
* **`filtered_edges_0.94_with_confidence.csv`**:
    * **Meaning:** An edge list in CSV format for $\tau=0.94$ that also includes the confidence (similarity) score for each edge.
    * **How to Load:** As with the other CSV edge lists, use `pandas`. The confidence score can be used as an edge weight.
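As a small sketch of this loading step (the column names `src_id`, `dst_id`, and `confidence` are assumptions; check the actual file header before use):

```python
import io

import networkx as nx
import pandas as pd

# Hypothetical miniature stand-in for filtered_edges_0.94_with_confidence.csv;
# the real column names may differ.
csv_text = (
    "src_id,dst_id,confidence\n"
    "db_00001,db_00002,0.97\n"
    "db_00001,db_00003,0.95\n"
)
edges = pd.read_csv(io.StringIO(csv_text))

# Build an undirected graph, keeping the confidence score as an edge
# attribute (usable as an edge weight).
G = nx.from_pandas_edgelist(edges, source="src_id", target="dst_id",
                            edge_attr="confidence")
print(G.number_of_nodes(), G.number_of_edges())  # prints: 3 2
```

For the real files, replace the in-memory string with the CSV path.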
* **`graph_raw_0.93.dgl`**, **`graph_raw_0.94.dgl`**, **`graph_raw_0.96.dgl`**:
    * **Meaning:** Graph objects serialized in the Deep Graph Library (DGL) format for different thresholds. They contain the basic graph structure (nodes and edges).
    * **How to Load `.dgl` files:** DGL provides functions to save and load graph objects:

      ```python
      import dgl

      # dgl.load_graphs returns (list_of_graphs, label_dict)
      graphs, _ = dgl.load_graphs('graph_raw_0.94.dgl')
      g = graphs[0]  # take the first (and typically only) graph
      ```
* **`graph_with_properties_0.94.dgl`**:
    * **Meaning:** A DGL graph object for $\tau=0.94$ with node and edge properties embedded directly in the graph structure. This is convenient for direct use in DGL-based graph neural network models.
    * **How to Load:** Same as the other `.dgl` files, using `dgl.load_graphs()`. Node features are accessed via `g.ndata` and edge features via `g.edata`.
**B. Node Property Files:**

These files provide features and metadata for each database node.

* **`database_embeddings.pt`**:
    * **Meaning:** A PyTorch (`.pt`) file containing the pre-computed embedding vector for each database. These are the core semantic features.
    * **How to Load:** Use `torch.load('database_embeddings.pt')` in Python with PyTorch installed.
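A minimal round-trip sketch, assuming the file holds a single tensor of shape `(num_databases, embedding_dim)`; the real file's structure (e.g., a dict keyed by database ID) may differ:

```python
import torch

# Hypothetical stand-in for database_embeddings.pt: one embedding vector
# per database (100 databases, 32-dimensional embeddings assumed here).
emb = torch.randn(100, 32)
torch.save(emb, "database_embeddings_demo.pt")

# Loading mirrors the documented call for the real file.
loaded = torch.load("database_embeddings_demo.pt")
print(tuple(loaded.shape))  # (100, 32)
```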
* **`node_structural_properties.csv`**:
    * **Meaning:** Structural characteristics of each database (e.g., number of tables, columns, foreign key counts).
* **`column_cardinality.csv`**:
    * **Meaning:** Statistics on the number of unique values (cardinality) in columns within each database.
* **`column_entropy.csv`**:
    * **Meaning:** Entropy values calculated for columns within each database, indicating data diversity.
* **`data_volume.csv`**:
    * **Meaning:** Information on the size or volume of data in each database (e.g., total rows, file size).
* **`cluster_assignments_dim2_sz100_msNone.csv`**:
    * **Meaning:** Cluster labels assigned to each database after dimensionality reduction (e.g., t-SNE) and clustering (e.g., HDBSCAN).
* **`community_assignment.csv`**:
    * **Meaning:** Community labels assigned to each database by graph community detection algorithms (e.g., Louvain).
* **`tsne_embeddings_dim2.csv`**:
    * **Meaning:** Two-dimensional t-SNE projection of the database embeddings, typically used for visualization.
* **How to Load CSV Node Properties:** Use `pandas.read_csv()` in Python. The resulting DataFrames can be merged or used to assign attributes to nodes in a graph object.
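For example, two property files can be joined on a shared database identifier (the column names below are assumptions, not the documented schemas):

```python
import io

import pandas as pd

# Hypothetical samples of two node-property files; real headers may differ.
structural = pd.read_csv(io.StringIO(
    "database_id,num_tables,num_columns\n"
    "db_00001,12,85\n"
    "db_00002,7,40\n"))
volume = pd.read_csv(io.StringIO(
    "database_id,total_rows\n"
    "db_00001,10432\n"
    "db_00002,3311\n"))

# Join on the shared key: one row per database, columns from both files.
node_props = structural.merge(volume, on="database_id")
print(node_props.shape)  # (2, 4)
```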
**C. Edge Property Files:**

These CSV files provide features for the relationships (edges) between databases.

* **`edge_embed_sim.csv`**:
    * **Meaning:** Embedding-based similarity scores (EmbedSim) for connected database pairs. This may be redundant with the confidence column in `filtered_edges_..._with_confidence.csv`, or it may be a global list of all pairwise similarities above an initial cutoff.
* **`edge_structural_properties_GED_0.94.csv`**:
    * **Meaning:** Structural similarity metrics (potentially Graph Edit Distance or related measures) for edges in the graph constructed with $\tau=0.94$.
* **How to Load CSV Edge Properties:** Use `pandas.read_csv()`. These can be used to assign attributes to edges in a graph object.
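One way to attach such per-edge values to an existing NetworkX graph (the `src_id`/`dst_id`/`jaccard_tables` names are illustrative assumptions):

```python
import io

import networkx as nx
import pandas as pd

# Hypothetical edge-property rows keyed by the two endpoint database IDs;
# the real column names may differ.
props = pd.read_csv(io.StringIO(
    "src_id,dst_id,jaccard_tables\n"
    "db_00001,db_00002,0.42\n"))

G = nx.Graph()
G.add_edge("db_00001", "db_00002")

# Map each (u, v) pair to its property value and attach it to the graph.
attrs = {(r.src_id, r.dst_id): r.jaccard_tables for r in props.itertuples()}
nx.set_edge_attributes(G, attrs, name="jaccard_tables")
print(G["db_00001"]["db_00002"]["jaccard_tables"])  # 0.42
```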
**D. Other Analysis Files:**

* **`distdiv_results.csv`**:
    * **Meaning:** Likely results from a distance or diversity analysis performed on the databases or their embeddings; the exact nature is detailed in the paper and accompanying documentation.
    * **How to Load:** As a CSV file, using `pandas`.

Detailed instructions on the specific schemas of these CSV files, the precise contents of the `.pt` and `.dgl` files, and example usage scripts will be provided in the code repository and full dataset documentation upon release.