---
license: mit
task_categories:
- image-classification
- feature-extraction
- zero-shot-classification
language:
- en
tags:
- biodiversity
- biology
- computer-vision
- multimodal
- self-supervised-learning
- florida
- plants
pretty_name: DeepEarth Central Florida Native Plants
size_categories:
- 10K<n<100K
configs:
- config_name: default
  data_files:
    - split: train
      path: observations.parquet
    - split: test
      path: observations.parquet
---

# DeepEarth Central Florida Native Plants Dataset v0.2.0

## 🌿 Dataset Summary

A comprehensive multimodal dataset featuring **33,665 observations** of **232 native plant species** from Central Florida. This dataset combines citizen science observations with state-of-the-art vision and language embeddings for advancing multimodal self-supervised ecological intelligence research.

### Key Features
- 🌍 **Spatiotemporal Coverage**: Complete GPS coordinates and timestamps for all observations
- 🖼️ **Multimodal**: 31,136 observations with images, 7,113 with vision embeddings
- 🧬 **Language Embeddings**: DeepSeek-V3 embeddings for all 232 species
- 👁️ **Vision Embeddings**: V-JEPA-2 self-supervised features (6.5M dimensions)
- 📊 **Rigorous Splits**: Spatiotemporal train/test splits for robust evaluation

## 📦 Dataset Structure

```
observations.parquet         # Main dataset (500MB)
vision_index.parquet        # Vision embeddings index
vision_embeddings/          # Vision features (50GB total)
├── embeddings_000000.parquet
├── embeddings_000001.parquet
└── ... (159 files)
```

## 🚀 Quick Start

```python
from datasets import load_dataset
import pandas as pd

# Load main dataset
dataset = load_dataset("deepearth/central-florida-plants")

# Access data
train_data = dataset['train']
print(f"Training samples: {len(train_data)}")
print(f"Features: {train_data.features}")

# Load vision embeddings (download required due to size; see the download sketch below)
vision_index = pd.read_parquet("vision_index.parquet")
vision_data = pd.read_parquet("vision_embeddings/embeddings_000000.parquet")
```
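
The vision embedding shards are too large to download eagerly; individual files can be fetched on demand with `huggingface_hub`. A minimal sketch, assuming the file layout shown in the structure section above:

```python
from huggingface_hub import hf_hub_download
import pandas as pd

# Download single files from the dataset repo (cached locally by huggingface_hub)
index_path = hf_hub_download(
    repo_id="deepearth/central-florida-plants",
    filename="vision_index.parquet",
    repo_type="dataset",
)
shard_path = hf_hub_download(
    repo_id="deepearth/central-florida-plants",
    filename="vision_embeddings/embeddings_000000.parquet",
    repo_type="dataset",
)

vision_index = pd.read_parquet(index_path)
vision_shard = pd.read_parquet(shard_path)
```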

## 📊 Data Fields

Each observation contains:

| Field | Type | Description |
|-------|------|-------------|
| `gbif_id` | int64 | Unique GBIF occurrence ID |
| `taxon_id` | string | GBIF taxon ID |
| `taxon_name` | string | Scientific species name |
| `latitude` | float | GPS latitude |
| `longitude` | float | GPS longitude |
| `year` | int | Observation year |
| `month` | int | Observation month |
| `day` | int | Observation day |
| `hour` | int | Observation hour (nullable) |
| `minute` | int | Observation minute (nullable) |
| `second` | int | Observation second (nullable) |
| `image_urls` | List[string] | URLs to observation images |
| `num_images` | int | Relative image number in GBIF occurrence |
| `has_vision` | bool | Vision embeddings available |
| `vision_file_indices` | List[int] | Indices to vision files |
| `language_embedding` | List[float] | 7,168-dim DeepSeek-V3 embedding |
| `split` | string | train/spatial_test/temporal_test |
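
For quick tabular exploration, the train split can be converted to a pandas DataFrame. A minimal sketch, continuing from the Quick Start snippet above:

```python
# Convert the train split to pandas for quick inspection
train_df = dataset['train'].to_pandas()

print(train_df[['taxon_name', 'latitude', 'longitude', 'year', 'has_vision']].head())
print(f"Species count: {train_df['taxon_name'].nunique()}")
```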

## 🔄 Data Splits

The dataset uses rigorous spatiotemporal splits:

- **Temporal Test**: 2,730 observations, all from 2025 (future generalization)
- **Spatial Test**: 5 non-overlapping geographic regions
- **Train**: the remaining 30,935 observations
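
Because both Hugging Face splits in the config point at the same parquet file, the `split` column is the authoritative assignment. A minimal sketch for materializing the three spatiotemporal subsets, continuing from the Quick Start snippet:

```python
# Partition the data by the `split` column
train_set = dataset['train'].filter(lambda x: x['split'] == 'train')
temporal_test = dataset['train'].filter(lambda x: x['split'] == 'temporal_test')
spatial_test = dataset['train'].filter(lambda x: x['split'] == 'spatial_test')

print(len(train_set), len(temporal_test), len(spatial_test))
```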

## 🤖 Embeddings

### Language Embeddings (DeepSeek-V3)
- **Dimensions**: 7,168
- **Source**: Scientific species descriptions
- **Coverage**: All 232 species
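
Since every observation row carries its species' embedding, a species-level embedding matrix can be recovered by de-duplicating on `taxon_name`. A minimal sketch, continuing from the Quick Start snippet:

```python
import numpy as np

# One 7,168-dim DeepSeek-V3 embedding per species
lang_df = dataset['train'].to_pandas()[['taxon_name', 'language_embedding']]
lang_df = lang_df.drop_duplicates(subset='taxon_name')

species_matrix = np.stack(lang_df['language_embedding'].to_list())
print(species_matrix.shape)  # expected: (number of species, 7168)
```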

### Vision Embeddings (V-JEPA-2)
- **Dimensions**: 6,488,064 values per embedding
- **Structure**: 8 temporal frames × 24×24 spatial patches × 1408 features
- **Model**: Vision Transformer Giant with self-supervised pretraining
- **Coverage**: 7,113 images
- **Storage**: Flattened arrays in parquet files (use provided utilities to reshape)

## 💡 Usage Examples

### Working with V-JEPA 2 Embeddings
```python
import ast

import numpy as np
import pandas as pd

# Load one shard of vision embeddings
vision_df = pd.read_parquet("vision_embeddings/embeddings_000000.parquet")
row = vision_df.iloc[0]

# Reshape from the flattened storage format back to the original structure
embedding = np.asarray(row['embedding'])
original_shape = ast.literal_eval(row['original_shape'])  # [4608, 1408]

# First to 2D: (4608 patches, 1408 features)
embedding_2d = embedding.reshape(original_shape)

# Then to 4D: (8 temporal, 24 height, 24 width, 1408 features)
embedding_4d = embedding_2d.reshape(8, 24, 24, 1408)

# Get specific temporal frame (0-7)
frame_0 = embedding_4d[0]  # Shape: (24, 24, 1408)

# Get mean embedding for image-level tasks
image_embedding = embedding_4d.mean(axis=(0, 1, 2))  # Shape: (1408,)
```

### Species Distribution Modeling
```python
# Filter the train split for a specific species
species_data = dataset['train'].filter(lambda x: x['taxon_name'] == 'Quercus virginiana')

# Use spatiotemporal data for distribution modeling
coords = [(d['latitude'], d['longitude']) for d in species_data]
```
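
For a quick look at the spatial distribution, the coordinates can be binned into a coarse occurrence grid. A minimal sketch with numpy; the 0.1-degree bin size is an arbitrary choice for illustration:

```python
import numpy as np

lats = np.array([c[0] for c in coords])
lons = np.array([c[1] for c in coords])

# Count observations in ~0.1 degree cells over the species' observed extent
grid, lat_edges, lon_edges = np.histogram2d(
    lats, lons,
    bins=[np.arange(lats.min(), lats.max() + 0.1, 0.1),
          np.arange(lons.min(), lons.max() + 0.1, 0.1)],
)
print(grid.shape, int(grid.sum()))
```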

### Multimodal Learning
```python
# Combine vision and language embeddings (iterating the train split)
for sample in dataset['train']:
    if sample['has_vision']:
        lang_emb = sample['language_embedding']
        vision_idx = sample['vision_file_indices'][0]
        # Load the corresponding vision embedding (see the helper sketch below)
        vision_emb = load_vision_embedding(vision_idx)
```
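
`load_vision_embedding` above is a user-supplied helper. One possible sketch, under the assumption that each value in `vision_file_indices` selects one of the numbered shards under `vision_embeddings/`; the exact mapping from an observation to a shard and row should be resolved via `vision_index.parquet`:

```python
import ast

import numpy as np
import pandas as pd


def load_vision_embedding(file_idx, row_idx=0, embeddings_dir="vision_embeddings"):
    """Hypothetical helper: load one V-JEPA-2 embedding and restore its 4D shape.

    Assumes shards are named embeddings_{file_idx:06d}.parquet and store a
    flattened 'embedding' column plus an 'original_shape' column, as in the
    reshape example above.
    """
    shard = pd.read_parquet(f"{embeddings_dir}/embeddings_{file_idx:06d}.parquet")
    row = shard.iloc[row_idx]
    flat = np.asarray(row["embedding"], dtype=np.float32)
    flat = flat.reshape(ast.literal_eval(row["original_shape"]))  # (4608, 1408)
    return flat.reshape(8, 24, 24, 1408)  # (temporal, height, width, features)
```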

### Zero-shot Species Classification
```python
# Build a species -> language-embedding lookup for zero-shot classification
species_embeddings = {}
for sample in dataset['train']:
    species_embeddings.setdefault(sample['taxon_name'], sample['language_embedding'])
```
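
These species-level embeddings can then be scored against a query by cosine similarity. A minimal sketch; the query here is simply another species' embedding, purely for illustration:

```python
import numpy as np

# Row-normalize the species embedding matrix
names = list(species_embeddings)
matrix = np.stack([np.asarray(species_embeddings[n]) for n in names])
matrix = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)

# Cosine similarity of every species against the query embedding
query = matrix[names.index('Quercus virginiana')]
scores = matrix @ query

# Top-5 most similar species by language embedding
for i in np.argsort(scores)[::-1][:5]:
    print(names[i], float(scores[i]))
```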

## 📄 License

This dataset is released under the **MIT License**.

## 📚 Citation

If you use this dataset, please cite:

```bibtex
@dataset{deepearth_cf_plants_2024,
  title={DeepEarth Central Florida Native Plants: A Multimodal Biodiversity Dataset},
  author={DeepEarth Team},
  year={2024},
  version={0.2.0},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/deepearth/central-florida-plants}
}
```

## 🌟 Acknowledgments

We thank all citizen scientists who contributed observations through iNaturalist and GBIF. This dataset was created as part of the DeepEarth initiative for multimodal self-supervised ecological intelligence research.

## 🔗 Related Resources

- [DeepEarth Project](https://github.com/deepearth)
- [V-JEPA Model](https://ai.meta.com/vjepa/)
- [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3)
- [GBIF Portal](https://www.gbif.org)


## 📈 Dataset Statistics

- **Total Size**: ~51 GB
- **Main Dataset**: 500 MB
- **Vision Embeddings**: 50 GB
- **Image URLs**: 31,136 total images referenced
- **Temporal Range**: 2019-2025
- **Geographic Scope**: Central Florida, USA

---
*Dataset prepared by the DeepEarth team for advancing multimodal self-supervised ecological intelligence research.*