Commit 79ce32b · 1 Parent(s): 87308e5
arminmehrabian committed

Update NASA Knowledge Graph Dataset to version v1.1.0

Files changed (4):
  1. README.md +86 -442
  2. graph.cypher +2 -2
  3. graph.graphml +2 -2
  4. graph.json +2 -2
README.md CHANGED
@@ -12,27 +12,35 @@ tags:

  # NASA Knowledge Graph Dataset

  ## Dataset Summary

- The **NASA Knowledge Graph Dataset** is an expansive graph-based dataset designed to integrate and interconnect information about satellite datasets, scientific publications, instruments, platforms, projects, data centers, and science keywords. This knowledge graph is particularly focused on datasets managed by NASA's Distributed Active Archive Centers (DAACs), which are NASA's data repositories responsible for archiving and distributing scientific data. In addition to NASA DAACs, the graph includes datasets from 184 data providers worldwide, including various government agencies and academic institutions.
-
- The primary goal of the NASA Knowledge Graph is to bridge scientific publications with the datasets they reference, facilitating deeper insights and research opportunities within NASA's scientific and data ecosystem. By organizing these interconnections within a graph structure, this dataset enables advanced analyses, such as discovering influential datasets, understanding research trends, and exploring scientific collaborations.

  ## Data Integrity

  Each file in the dataset has a SHA-256 checksum to verify its integrity:

- | File Name       | SHA-256 Checksum                                                   |
- |-----------------|--------------------------------------------------------------------|
- | `graph.cypher`  | `d78f7b166a86be3ceec16db75a1575dac95249af188b1d2e2adab7388a8e654a` |
- | `graph.graphml` | `a781e358b338a181db5081a50127325febab67dcb283cb4124c877bf06de439e` |
- | `graph.json`    | `4794fd070953b3544ba3c841f13155f1cf8a1ca9abd1b240bbfaf482ce2cde32` |

  ### Verification

- To verify the integrity of each file, calculate its SHA-256 checksum and compare it with the hashes provided above.
-
- You can use the following Python code to calculate the SHA-256 checksum:

  ```python
  import hashlib

@@ -42,483 +50,119 @@ def calculate_sha256(filepath):
          for byte_block in iter(lambda: f.read(4096), b""):
              sha256_hash.update(byte_block)
      return sha256_hash.hexdigest()
- ```
-
- ## Dataset Structure
-
- ### Nodes and Properties
-
- The knowledge graph consists of seven main node types, each representing a different entity within NASA's ecosystem:
-
- #### 1. Dataset
-
- - **Description**: Represents satellite datasets, particularly those managed by NASA DAACs, along with datasets from other governmental and academic data providers. These nodes contain metadata and attributes specific to each dataset.
- - **Properties**:
-   - `globalId` (String): Unique identifier for the dataset.
-   - `doi` (String): Digital Object Identifier.
-   - `shortName` (String): Abbreviated name of the dataset.
-   - `longName` (String): Full name of the dataset.
-   - `abstract` (String): Brief summary of the dataset.
-   - `cmrId` (String): Common Metadata Repository ID.
-   - `daac` (String): Distributed Active Archive Center associated with the dataset.
-   - `temporalFrequency` (String): Frequency of data collection (e.g., daily, monthly).
-   - `temporalExtentStart` (Date): Start date of the dataset's temporal coverage.
-   - `temporalExtentEnd` (Date): End date of the dataset's temporal coverage.
-   - `nwCorner` (Geo-Coordinate): Northwest corner coordinate of spatial coverage.
-   - `seCorner` (Geo-Coordinate): Southeast corner coordinate of spatial coverage.
-   - `landingPageUrl` (URL): Webpage link to the dataset.
-   - `pagerank_global` (Float): PageRank score calculated using the natural graph direction.
-   - `fastrp_embedding_with_labels` (List<Float>): FastRP embedding vector considering node labels.
-
- #### 2. Publication
-
- - **Description**: This node type captures publications that reference or use datasets, particularly those from NASA DAACs and other included data providers. By linking datasets and publications, this node type enables analysis of scientific impact and research usage of NASA’s datasets.
- - **Properties**:
-   - `globalId` (String): Unique identifier for the publication.
-   - `DOI` (String): Digital Object Identifier.
-   - `title` (String): Title of the publication.
-   - `abstract` (String): Summary of the publication's content.
-   - `authors` (List<String>): List of authors.
-   - `year` (Integer): Year of publication.
-   - `pagerank_global` (Float): PageRank score.
-   - `fastrp_embedding_with_labels` (List<Float>): FastRP embedding vector.
-
- #### 3. ScienceKeyword
-
- - **Description**: Represents scientific keywords used to classify datasets and publications. Keywords provide topical context and aid in identifying research trends and related areas.
- - **Properties**:
-   - `globalId` (String): Unique identifier.
-   - `name` (String): Name of the science keyword.
-   - `pagerank_global` (Float): PageRank score.
-   - `fastrp_embedding_with_labels` (List<Float>): FastRP embedding vector.
-
- #### 4. Instrument
-
- - **Description**: Instruments used in data collection, often mounted on platforms like satellites. Instruments are linked to the datasets they help generate.
- - **Properties**:
-   - `globalId` (String): Unique identifier.
-   - `shortName` (String): Abbreviated name.
-   - `longName` (String): Full name.
-   - `pagerank_global` (Float): PageRank score.
-   - `fastrp_embedding_with_labels` (List<Float>): FastRP embedding vector.
-
- #### 5. Platform
-
- - **Description**: Platforms, such as satellites or airborne vehicles, that host instruments for data collection.
- - **Properties**:
-   - `globalId` (String): Unique identifier.
-   - `shortName` (String): Abbreviated name.
-   - `longName` (String): Full name.
-   - `Type` (String): Type of platform.
-   - `pagerank_global` (Float): PageRank score.
-   - `fastrp_embedding_with_labels` (List<Float>): FastRP embedding vector.
-
- #### 6. Project
-
- - **Description**: NASA projects associated with datasets, often indicating a funding source or collaborative initiative for data collection and research.
- - **Properties**:
-   - `globalId` (String): Unique identifier.
-   - `shortName` (String): Abbreviated name.
-   - `longName` (String): Full name.
-   - `pagerank_global` (Float): PageRank score.
-   - `fastrp_embedding_with_labels` (List<Float>): FastRP embedding vector.
-
- #### 7. DataCenter
-
- - **Description**: Represents data centers, primarily NASA DAACs, as well as other data providers in academia and government who manage and distribute datasets.
- - **Properties**:
-   - `globalId` (String): Unique identifier.
-   - `shortName` (String): Abbreviated name.
-   - `longName` (String): Full name.
-   - `url` (URL): Website of the data center.
-   - `pagerank_global` (Float): PageRank score.
-   - `fastrp_embedding_with_labels` (List<Float>): FastRP embedding vector.
-
- ### Relationships
-
- The knowledge graph includes several relationship types that define how nodes are interconnected.
-
- #### 1. HAS_DATASET
-
- - **Description**: Connects a `DataCenter` to its `Dataset(s)`.
- - **Properties**: None.
-
- #### 2. OF_PROJECT
-
- - **Description**: Links a `Dataset` to a `Project`.
- - **Properties**: None.
-
- #### 3. HAS_PLATFORM
-
- - **Description**: Associates a `Dataset` with a `Platform`.
- - **Properties**: None.
-
- #### 4. HAS_INSTRUMENT
-
- - **Description**: Connects a `Platform` to an `Instrument`.
- - **Properties**: None.
-
- #### 5. HAS_SCIENCEKEYWORD
-
- - **Description**: Links a `Dataset` to a `ScienceKeyword`.
- - **Properties**: None.
-
- #### 6. HAS_SUBCATEGORY
-
- - **Description**: Defines hierarchical relationships between `ScienceKeyword` nodes.
- - **Properties**: None.
-
- #### 7. CITES
-
- - **Description**: Represents citation relationships between `Publication` nodes, indicating how research builds on previous work.
- - **Properties**: None.
-
- #### 8. HAS_APPLIED_RESEARCH_AREA
-
- - **Description**: Associates a `Publication` with a `ScienceKeyword`.
- - **Properties**: None.
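These relationship types can be traversed directly from the JSON export without a graph database. Below is a minimal, illustrative sketch (not part of the dataset's official tooling) that indexes relationships by type and follows `HAS_DATASET` edges from a `DataCenter` to its `Dataset` nodes; it assumes the newline-delimited node/relationship records described under Data Formats, and `build_adjacency`/`datasets_of` are hypothetical helper names:

```python
import json
from collections import defaultdict

def build_adjacency(path):
    """Index relationships by type as (start globalId, end globalId) pairs."""
    adjacency = defaultdict(list)
    with open(path) as f:
        for line in f:
            obj = json.loads(line)
            if obj["type"] == "relationship":
                adjacency[obj["label"]].append(
                    (obj["start"]["properties"]["globalId"],
                     obj["end"]["properties"]["globalId"])
                )
    return adjacency

def datasets_of(adjacency, data_center_id):
    # Follow HAS_DATASET edges from a DataCenter node to its Datasets.
    return [end for start, end in adjacency["HAS_DATASET"] if start == data_center_id]
```

The same pattern works for any of the eight relationship types, e.g. `adjacency["CITES"]` for the publication citation network.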
 
-
- ## Statistics
-
- ### Total Counts
-
- | Type                     | Count   |
- |--------------------------|---------|
- | **Total Nodes**          | 135,764 |
- | **Total Relationships**  | 365,857 |
-
- ### Node Label Counts
-
  | Node Label | Count |
  |-------------------|----------|
- | Dataset | 6,390 |
- | DataCenter | 184 |
- | Project | 333 |
- | Platform | 442 |
- | Instrument | 867 |
  | ScienceKeyword | 1,609 |
- | Publication | 125,939 |
-
- ### Relationship Label Counts
-
  | Relationship Label | Count |
  |---------------------------|-----------|
- | HAS_DATASET | 9,017 |
- | OF_PROJECT | 6,049 |
- | HAS_PLATFORM | 9,884 |
- | HAS_INSTRUMENT | 2,469 |
  | HAS_SUBCATEGORY | 1,823 |
- | HAS_SCIENCEKEYWORD | 20,436 |
- | CITES | 208,670 |
- | HAS_APPLIED_RESEARCH_AREA | 89,039 |
- | USES_DATASET | 18,470 |
 
- ## Data Formats
-
- The Knowledge Graph Dataset is available in three formats:
-
- ### 1. JSON
-
- - **File**: `graph.json`
- - **Description**: A hierarchical data format representing nodes and relationships. Each node includes its properties, such as `globalId`, `doi`, and `pagerank_global`.
- - **Usage**: Suitable for web applications and APIs, and for use cases where hierarchical data structures are preferred.
-
- #### Loading the JSON Format
-
- To load the JSON file into a graph database using Python and multiprocessing, you can use the following script:
-
- ```python
- import json
- from tqdm import tqdm
- from collections import defaultdict
- from multiprocessing import Pool, cpu_count
- from neo4j import GraphDatabase
-
- # Batch size for processing
- BATCH_SIZE = 100
-
- # Neo4j credentials (replace with environment variables or placeholders)
- NEO4J_URI = "bolt://<your-neo4j-host>:<port>"  # e.g., "bolt://localhost:7687"
- NEO4J_USER = "<your-username>"
- NEO4J_PASSWORD = "<your-password>"
-
-
- def ingest_data(file_path):
-     # Initialize counters and label trackers
-     node_label_counts = defaultdict(int)
-     relationship_label_counts = defaultdict(int)
-     node_count = 0
-     relationship_count = 0
-
-     with open(file_path, "r") as f:
-         nodes = []
-         relationships = []
-
-         # Read and categorize nodes and relationships, and count labels
-         for line in tqdm(f, desc="Reading JSON Lines"):
-             obj = json.loads(line.strip())
-             if obj["type"] == "node":
-                 nodes.append(obj)
-                 node_count += 1
-                 for label in obj["labels"]:
-                     node_label_counts[label] += 1
-             elif obj["type"] == "relationship":
-                 relationships.append(obj)
-                 relationship_count += 1
-                 relationship_label_counts[obj["label"]] += 1
-
-     # Print statistics
-     print("\n=== Data Statistics ===")
-     print(f"Total Nodes: {node_count}")
-     print(f"Total Relationships: {relationship_count}")
-     print("\nNode Label Counts:")
-     for label, count in node_label_counts.items():
-         print(f"  {label}: {count}")
-     print("\nRelationship Label Counts:")
-     for label, count in relationship_label_counts.items():
-         print(f"  {label}: {count}")
-     print("=======================")
-
-     # Multiprocess node ingestion
-     print("Starting Node Ingestion...")
-     node_batches = [nodes[i : i + BATCH_SIZE] for i in range(0, len(nodes), BATCH_SIZE)]
-     with Pool(processes=cpu_count()) as pool:
-         list(
-             tqdm(
-                 pool.imap(ingest_nodes_batch, node_batches),
-                 total=len(node_batches),
-                 desc="Ingesting Nodes",
-             )
-         )
-
-     # Multiprocess relationship ingestion
-     print("Starting Relationship Ingestion...")
-     relationship_batches = [
-         relationships[i : i + BATCH_SIZE]
-         for i in range(0, len(relationships), BATCH_SIZE)
-     ]
-     with Pool(processes=cpu_count()) as pool:
-         list(
-             tqdm(
-                 pool.imap(ingest_relationships_batch, relationship_batches),
-                 total=len(relationship_batches),
-                 desc="Ingesting Relationships",
-             )
-         )
-
-
- def ingest_nodes_batch(batch):
-     with GraphDatabase.driver(NEO4J_URI, auth=(NEO4J_USER, NEO4J_PASSWORD)) as driver:
-         with driver.session() as session:
-             for node in batch:
-                 try:
-                     label = node["labels"][0]  # Assumes a single label per node
-                     query = f"""
-                     MERGE (n:{label} {{globalId: $globalId}})
-                     SET n += $properties
-                     """
-                     session.run(
-                         query,
-                         globalId=node["properties"]["globalId"],
-                         properties=node["properties"],
-                     )
-                 except Exception as e:
-                     print(
-                         f"Error ingesting node with globalId {node['properties']['globalId']}: {e}"
-                     )
-
-
- def ingest_relationships_batch(batch):
-     with GraphDatabase.driver(NEO4J_URI, auth=(NEO4J_USER, NEO4J_PASSWORD)) as driver:
-         with driver.session() as session:
-             for relationship in batch:
-                 try:
-                     rel_type = relationship["label"]  # Use the label for the relationship
-                     query = f"""
-                     MATCH (start {{globalId: $start_globalId}})
-                     MATCH (end {{globalId: $end_globalId}})
-                     MERGE (start)-[r:{rel_type}]->(end)
-                     """
-                     session.run(
-                         query,
-                         start_globalId=relationship["start"]["properties"]["globalId"],
-                         end_globalId=relationship["end"]["properties"]["globalId"],
-                     )
-                 except Exception as e:
-                     print(
-                         f"Error ingesting relationship with label {relationship['label']}: {e}"
-                     )
-
-
- if __name__ == "__main__":
-     # Path to the JSON file
-     JSON_FILE_PATH = "<path-to-your-graph.json>"
-
-     # Run the ingestion process
-     ingest_data(JSON_FILE_PATH)
- ```
-
- ### 2. GraphML
-
- - **File**: `graph.graphml`
- - **Description**: An XML-based format well-suited for complex graph structures and metadata-rich representations.
- - **Usage**: Compatible with graph visualization and analysis tools, including Gephi, Cytoscape, and databases that support GraphML import.
-
- #### Loading the GraphML Format
-
- To import the GraphML file into a graph database with APOC support, use the following command:

  ```cypher
- CALL apoc.import.graphml("path/to/graph.graphml", {readLabels: true})
  ```

- ### 3. Cypher
-
- - **File**: `graph.cypher`
- - **Description**: A series of Cypher commands to recreate the knowledge graph structure.
- - **Usage**: Useful for recreating the graph in any Cypher-compatible graph database.
-
- #### Loading the Cypher Format
-
- To load the Cypher script, execute it directly using a command-line interface for your graph database:
-
- ```bash
- neo4j-shell -file path/to/graph.cypher
- ```
-
- ### 4. Loading the Knowledge Graph into PyTorch Geometric (PyG)
-
- This knowledge graph can be loaded into PyG (PyTorch Geometric) for further processing, analysis, or model training. Below is an example script that shows how to load the JSON data into a PyG-compatible `HeteroData` object.
-
- The script first reads the JSON data, processes nodes and relationships, and then loads everything into a `HeteroData` object for use with PyG.
-
- ```python
- import json
- import torch
- from torch_geometric.data import HeteroData
- from collections import defaultdict
-
- # Load JSON data from file
- file_path = "path/to/graph.json"  # Replace with your actual file path
- graph_data = []
- with open(file_path, "r") as f:
-     for line in f:
-         try:
-             graph_data.append(json.loads(line))
-         except json.JSONDecodeError as e:
-             print(f"Error decoding JSON line: {e}")
-             continue
-
- # Initialize HeteroData object
- data = HeteroData()
-
- # Mapping for node indices per node type
- node_mappings = defaultdict(dict)
-
- # Temporary storage for properties to reduce concatenation cost
- node_properties = defaultdict(lambda: defaultdict(list))
- edge_indices = defaultdict(lambda: defaultdict(list))
-
- # Process each item in the loaded JSON data
- for item in graph_data:
-     if item['type'] == 'node':
-         node_type = item['labels'][0]  # Assuming first label is the node type
-         node_id = item['id']
-         properties = item['properties']
-
-         # Store the node index mapping
-         node_index = len(node_mappings[node_type])
-         node_mappings[node_type][node_id] = node_index
-
-         # Store properties temporarily by type
-         for key, value in properties.items():
-             if isinstance(value, list) and all(isinstance(v, (int, float)) for v in value):
-                 node_properties[node_type][key].append(torch.tensor(value, dtype=torch.float))
-             elif isinstance(value, (int, float)):
-                 node_properties[node_type][key].append(torch.tensor([value], dtype=torch.float))
-             else:
-                 node_properties[node_type][key].append(value)  # non-numeric properties as lists
-
-     elif item['type'] == 'relationship':
-         start_type = item['start']['labels'][0]
-         end_type = item['end']['labels'][0]
-         start_id = item['start']['id']
-         end_id = item['end']['id']
-         edge_type = item['label']
-
-         # Map start and end node indices
-         start_idx = node_mappings[start_type][start_id]
-         end_idx = node_mappings[end_type][end_id]
-
-         # Append to edge list
-         edge_indices[(start_type, edge_type, end_type)]['start'].append(start_idx)
-         edge_indices[(start_type, edge_type, end_type)]['end'].append(end_idx)
-
- # Finalize node properties by batch processing
- for node_type, properties in node_properties.items():
-     data[node_type].num_nodes = len(node_mappings[node_type])
-     for key, values in properties.items():
-         if isinstance(values[0], torch.Tensor):
-             data[node_type][key] = torch.stack(values)
-         else:
-             data[node_type][key] = values  # Keep non-tensor properties as lists
-
- # Finalize edge indices in bulk
- for (start_type, edge_type, end_type), indices in edge_indices.items():
-     edge_index = torch.tensor([indices['start'], indices['end']], dtype=torch.long)
-     data[start_type, edge_type, end_type].edge_index = edge_index
-
- # Display statistics for verification
- print("Nodes and Properties:")
- for node_type in data.node_types:
-     print(f"\nNode Type: {node_type}")
-     print(f"Number of Nodes: {data[node_type].num_nodes}")
-     for key, value in data[node_type].items():
-         if key != 'num_nodes':
-             if isinstance(value, torch.Tensor):
-                 print(f"  - {key}: {value.shape}")
-             else:
-                 print(f"  - {key}: {len(value)} items (non-numeric)")
-
- print("\nEdges and Types:")
- for edge_type in data.edge_types:
-     edge_index = data[edge_type].edge_index
-     print(f"Edge Type: {edge_type} - Number of Edges: {edge_index.size(1)} - Shape: {edge_index.shape}")
  ```

  ---

  ## Citation

- Please cite the dataset as follows:
-
- **NASA Goddard Earth Sciences Data and Information Services Center (GES-DISC).** (2024). *Knowledge Graph of NASA Earth Observations Satellite Datasets and Related Research Publications* [Data set]. DOI: [10.57967/hf/3463](https://doi.org/10.57967/hf/3463)

  ### BibTeX
  ```bibtex
- @misc {nasa_goddard_earth_sciences_data_and_information_services_center__(ges-disc)_2024,
-     author = { {NASA Goddard Earth Sciences Data and Information Services Center (GES-DISC)} },
-     title = { nasa-eo-knowledge-graph },
-     year = 2024,
-     url = { https://huggingface.co/datasets/nasa-gesdisc/nasa-eo-knowledge-graph },
-     doi = { 10.57967/hf/3463 },
-     publisher = { Hugging Face }
  }
  ```

- ## References
-
- For details on the process of collecting these publications, please refer to:
-
- Gerasimov, I., Savtchenko, A., Alfred, J., Acker, J., Wei, J., & KC, B. (2024). *Bridging the Gap: Enhancing Prominence and Provenance of NASA Datasets in Research Publications.* Data Science Journal, 23(1). DOI: [10.5334/dsj-2024-001](https://doi.org/10.5334/dsj-2024-001)
-
- For any questions or further information, please contact:
  - Armin Mehrabian: [[email protected]](mailto:[email protected])
  - Irina Gerasimov: [[email protected]](mailto:[email protected])
  - Kendall Gilbert: [[email protected]](mailto:[email protected])
 
  # NASA Knowledge Graph Dataset

+ ## 🚀 **Update Notice**
+ **Version v1.1.0 - Updated on: 2025-02-19**
+
+ This is the updated version of the NASA Knowledge Graph Dataset, reflecting new datasets, publications, and relationships as of February 2025. This version includes improved node connectivity, updated embeddings, and refined dataset linking.
+
+ ---
+
  ## Dataset Summary

+ The **NASA Knowledge Graph Dataset** is an expansive graph-based dataset designed to integrate and interconnect information about satellite datasets, scientific publications, instruments, platforms, projects, data centers, and science keywords. This knowledge graph focuses on datasets managed by NASA's Distributed Active Archive Centers (DAACs), along with datasets from 184+ data providers worldwide, including government agencies and academic institutions.

+ The graph bridges scientific publications with the datasets they reference, enabling advanced analyses such as discovering influential datasets, understanding research trends, and exploring scientific collaborations.
+
+ ---

  ## Data Integrity

  Each file in the dataset has a SHA-256 checksum to verify its integrity:

+ | File Name      | SHA-256 Checksum                                                   |
+ |----------------|--------------------------------------------------------------------|
+ | `graph.cypher` | `6aef9d433ea1179f0b006e9c402ca48d487e78cb193d4535e920d61c078cbae3` |
+ | `graph.graphml`| `3b653606e7ed6ee5e4367d2b4e8c4800240e1c9dc9520eec5803a2f3204ebe5a` |
+ | `graph.json`   | `8d49fc801f87cb43dfdadb3dc77a6e1b2b1e7e8044b46883f398b69a71dd3ac4` |

  ### Verification

+ To verify file integrity, calculate the SHA-256 checksum and compare it with the provided hashes:

  ```python
  import hashlib

  def calculate_sha256(filepath):
      sha256_hash = hashlib.sha256()
      with open(filepath, "rb") as f:
          for byte_block in iter(lambda: f.read(4096), b""):
              sha256_hash.update(byte_block)
      return sha256_hash.hexdigest()

+ print(calculate_sha256("graph.json"))
+ ```
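All three files can be checked in one pass by wrapping the same hashing helper in a loop. The sketch below is illustrative rather than official tooling: the `EXPECTED` mapping copies the v1.1.0 checksum table above, and `verify` is a hypothetical helper name.

```python
import hashlib

# Expected SHA-256 checksums, copied from the v1.1.0 table above.
EXPECTED = {
    "graph.cypher": "6aef9d433ea1179f0b006e9c402ca48d487e78cb193d4535e920d61c078cbae3",
    "graph.graphml": "3b653606e7ed6ee5e4367d2b4e8c4800240e1c9dc9520eec5803a2f3204ebe5a",
    "graph.json": "8d49fc801f87cb43dfdadb3dc77a6e1b2b1e7e8044b46883f398b69a71dd3ac4",
}

def calculate_sha256(filepath):
    # Hash in 4 KiB chunks so large files never load fully into memory.
    sha256_hash = hashlib.sha256()
    with open(filepath, "rb") as f:
        for byte_block in iter(lambda: f.read(4096), b""):
            sha256_hash.update(byte_block)
    return sha256_hash.hexdigest()

def verify(files=EXPECTED):
    # Map each file to True (match), False (mismatch), or None (missing).
    results = {}
    for name, expected in files.items():
        try:
            results[name] = calculate_sha256(name) == expected
        except FileNotFoundError:
            results[name] = None
    return results
```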
 
 
 
 
 
 
 
 
 
 

+ ---

+ ## Dataset Statistics

+ ### Total Counts

+ | Type                     | Count     |
+ |--------------------------|-----------|
+ | **Total Nodes**          | 145,047   |
+ | **Total Relationships**  | 406,444   |

+ ### Node Label Counts

  | Node Label | Count |
  |-------------------|----------|
+ | Publication | 134,724 |
+ | Dataset | 6,817 |
  | ScienceKeyword | 1,609 |
+ | Instrument | 897 |
+ | Platform | 451 |
+ | Project | 352 |
+ | DataCenter | 197 |
+
+ ### Relationship Label Counts

  | Relationship Label | Count |
  |---------------------------|-----------|
+ | CITES | 208,429 |
+ | HAS_APPLIEDRESEARCHAREA | 119,695 |
+ | USES_DATASET | 25,814 |
+ | HAS_SCIENCEKEYWORD | 21,559 |
+ | HAS_PLATFORM | 10,393 |
+ | HAS_DATASET | 9,830 |
+ | OF_PROJECT | 6,375 |
+ | HAS_INSTRUMENT | 2,526 |
  | HAS_SUBCATEGORY | 1,823 |
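As a sanity check, the per-label counts in the v1.1.0 tables sum exactly to the stated totals; a quick arithmetic sketch (counts copied from the tables above):

```python
# Per-label counts copied from the v1.1.0 tables above.
node_counts = {
    "Publication": 134_724, "Dataset": 6_817, "ScienceKeyword": 1_609,
    "Instrument": 897, "Platform": 451, "Project": 352, "DataCenter": 197,
}
relationship_counts = {
    "CITES": 208_429, "HAS_APPLIEDRESEARCHAREA": 119_695, "USES_DATASET": 25_814,
    "HAS_SCIENCEKEYWORD": 21_559, "HAS_PLATFORM": 10_393, "HAS_DATASET": 9_830,
    "OF_PROJECT": 6_375, "HAS_INSTRUMENT": 2_526, "HAS_SUBCATEGORY": 1_823,
}

# Each sum matches the corresponding entry in Total Counts.
assert sum(node_counts.values()) == 145_047
assert sum(relationship_counts.values()) == 406_444
```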
 
 
 
 
+ ---

+ ## Dataset Structure

+ The knowledge graph consists of seven main node types representing different entities within NASA's ecosystem:

+ 1. **Dataset:** Satellite datasets managed by NASA DAACs and other providers.
+ 2. **Publication:** Research publications referencing datasets.
+ 3. **ScienceKeyword:** Keywords classifying datasets and publications.
+ 4. **Instrument:** Instruments used for data collection.
+ 5. **Platform:** Satellites and airborne platforms hosting instruments.
+ 6. **Project:** NASA projects associated with datasets.
+ 7. **DataCenter:** DAACs and other data providers.

+ **Relationships:** Nodes are connected by relationships such as `CITES`, `USES_DATASET`, `HAS_PLATFORM`, and `HAS_SCIENCEKEYWORD`.
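The node labels and relationship types above can be tallied directly from the JSON export using only the standard library. A minimal sketch, assuming the newline-delimited record layout described under Data Formats (`tally` is a hypothetical helper name):

```python
import json
from collections import Counter

def tally(path="graph.json"):
    # graph.json is newline-delimited: each line is a node
    # ({"type": "node", "labels": [...]}) or a relationship
    # ({"type": "relationship", "label": ...}).
    nodes, rels = Counter(), Counter()
    with open(path) as f:
        for line in f:
            obj = json.loads(line)
            if obj["type"] == "node":
                nodes.update(obj["labels"])
            elif obj["type"] == "relationship":
                rels[obj["label"]] += 1
    return nodes, rels
```

Running this against the full export should reproduce the counts listed under Dataset Statistics.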
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 

+ ---

+ ## Data Formats

+ The dataset is available in three formats:

+ 1. **JSON:** Hierarchical format suitable for APIs and web applications.
+ 2. **GraphML:** XML-based format compatible with graph analysis tools.
+ 3. **Cypher:** Cypher commands to recreate the graph structure.

+ ### Loading the Dataset

+ **Using Cypher:**
  ```cypher
+ // Import GraphML
+ CALL apoc.import.graphml("graph.graphml", {readLabels: true});
  ```

+ **Using Python (Neo4j Driver):**
+ ```python
+ from neo4j import GraphDatabase
+
+ def load_data():
+     uri = "bolt://localhost:7687"
+     auth = ("neo4j", "password")
+     driver = GraphDatabase.driver(uri, auth=auth)
+
+     with driver.session() as session:
+         session.run("CALL apoc.import.json('graph.json')")
+     driver.close()
+
+ load_data()
  ```
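For a quick look at the GraphML file without a graph database, the standard library's streaming XML parser can count nodes and edges. This is an illustrative sketch (`count_graphml_elements` is a hypothetical helper name); it assumes standard GraphML, whose elements live in the `http://graphml.graphdrawing.org/xmlns` namespace:

```python
import xml.etree.ElementTree as ET

GRAPHML_NS = "{http://graphml.graphdrawing.org/xmlns}"

def count_graphml_elements(path):
    # Stream-parse so the multi-GB file never has to fit in memory.
    counts = {"node": 0, "edge": 0}
    for _, elem in ET.iterparse(path, events=("end",)):
        if elem.tag == GRAPHML_NS + "node":
            counts["node"] += 1
        elif elem.tag == GRAPHML_NS + "edge":
            counts["edge"] += 1
        elem.clear()  # free the parsed subtree as we go
    return counts
```

The node total should match the Total Nodes figure under Dataset Statistics; tools such as Gephi or Cytoscape can then be used for visual exploration.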

  ---

  ## Citation

+ **NASA Goddard Earth Sciences Data and Information Services Center (GES-DISC).** (2025). *Knowledge Graph of NASA Earth Observations Satellite Datasets and Related Research Publications* [Data set]. DOI: [10.57967/hf/3463](https://doi.org/10.57967/hf/3463)

  ### BibTeX
  ```bibtex
+ @misc{nasa_knowledge_graph_2025,
+     author = {NASA Goddard Earth Sciences Data and Information Services Center (GES-DISC)},
+     title = {NASA Knowledge Graph Dataset (Version 1.1.0)},
+     year = 2025,
+     url = {https://huggingface.co/datasets/nasa-gesdisc/nasa-eo-knowledge-graph},
+     doi = {10.57967/hf/3463},
+     publisher = {Hugging Face}
  }
  ```

+ ---

+ ## Contact
+ For inquiries, please contact:
  - Armin Mehrabian: [[email protected]](mailto:[email protected])
  - Irina Gerasimov: [[email protected]](mailto:[email protected])
  - Kendall Gilbert: [[email protected]](mailto:[email protected])
graph.cypher CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d78f7b166a86be3ceec16db75a1575dac95249af188b1d2e2adab7388a8e654a
- size 1117381849
+ oid sha256:6aef9d433ea1179f0b006e9c402ca48d487e78cb193d4535e920d61c078cbae3
+ size 297709550

graph.graphml CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:a781e358b338a181db5081a50127325febab67dcb283cb4124c877bf06de439e
- size 1094260229
+ oid sha256:3b653606e7ed6ee5e4367d2b4e8c4800240e1c9dc9520eec5803a2f3204ebe5a
+ size 320845858

graph.json CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:4794fd070953b3544ba3c841f13155f1cf8a1ca9abd1b240bbfaf482ce2cde32
- size 6377999766
+ oid sha256:8d49fc801f87cb43dfdadb3dc77a6e1b2b1e7e8044b46883f398b69a71dd3ac4
+ size 1504091353