ecodash committed
Commit 87b039b · verified · 1 Parent(s): ee98a76

Upload dataset_viewer.py with huggingface_hub

Files changed (1)
  1. dataset_viewer.py +212 -152
dataset_viewer.py CHANGED
@@ -1,155 +1,215 @@
- ---
- license: cc-by-4.0
- task_categories:
- - feature-extraction
- language:
- - en
- tags:
- - biology
- - ecology
- - plants
- - embeddings
- - florida
- - biodiversity
- pretty_name: Central Florida Native Plants Language Embeddings
- size_categories:
- - n<1K
- ---

- # Central Florida Native Plants Language Embeddings
-
- This dataset contains language embeddings for 232 native plant species from Central Florida, extracted using the DeepSeek-V3 language model.
-
- ## Dataset Description
-
- - **Curated by:** DeepEarth Project
- - **Language(s):** English
- - **License:** CC-BY-4.0
-
- ### Dataset Summary
-
- This dataset provides pre-computed language embeddings for Central Florida plant species. Each species has been encoded using the prompt "Ecophysiology of {species_name}:" to capture semantic information about the plant's ecological characteristics.
-
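For reference, the encoding prompt is just the template above filled with a species' scientific name. A minimal sketch (the species names below are illustrative examples, not the full 232-species list):

```python
# Minimal sketch of the prompt template described above.
# The species names here are illustrative, not drawn from the dataset files.
species_names = ["Serenoa repens", "Sabal palmetto", "Quercus virginiana"]

prompts = [f"Ecophysiology of {name}:" for name in species_names]
for prompt in prompts:
    print(prompt)  # e.g. "Ecophysiology of Serenoa repens:"
```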
- ## Dataset Structure
-
- ### Data Instances
-
- Each species is represented by:
- A PyTorch file (`.pt`) containing a dictionary with embeddings and metadata
- A CSV file containing the token mappings
-
- ### Embedding File Structure
-
- Each `.pt` file contains a dictionary with:
- `mean_embedding`: Tensor of shape `[7168]` - mean-pooled embedding across all tokens
- `token_embeddings`: Tensor of shape `[num_tokens, 7168]` - individual token embeddings
- `species_name`: String - the species name
- `taxon_id`: String - GBIF taxon ID
- `num_tokens`: Integer - number of tokens (typically 18-20)
- `embedding_stats`: Dictionary with embedding statistics
- `timestamp`: String - when the embedding was created
-
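The snippet below is a minimal sketch of inspecting one of these files locally. It assumes an embedding file has already been downloaded (the file name `2650927.pt` is just an example ID), and it treats `mean_embedding` as a plain average of `token_embeddings`, which is an assumption worth checking rather than a guarantee:

```python
import torch

# Hypothetical local path; any downloaded embedding file will do.
data = torch.load("embeddings/2650927.pt")

print(sorted(data.keys()))
print(data["mean_embedding"].shape)    # expected: torch.Size([7168])
print(data["token_embeddings"].shape)  # expected: torch.Size([num_tokens, 7168])

# Assumption: mean pooling is a plain average over tokens.
# If this check fails, the pooling used upstream differs from a simple mean.
print(torch.allclose(data["mean_embedding"],
                     data["token_embeddings"].mean(dim=0), atol=1e-5))
```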
- ### Token Mapping Structure
-
- Token mapping CSV files contain:
- `position`: Token position in sequence
- `token_id`: Token ID in model vocabulary
- `token`: Token string representation
-
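As a small illustration, each row of the token CSV lines up with a row of `token_embeddings` via its `position` column. This sketch assumes the embedding file and its matching CSV (example ID `2650927`) are already downloaded locally:

```python
import torch
import pandas as pd

# Hypothetical local paths using an example GBIF ID.
data = torch.load("embeddings/2650927.pt")
tokens = pd.read_csv("tokens/2650927.csv")

# Pair each token string with its embedding row by position.
for _, row in tokens.iterrows():
    emb = data["token_embeddings"][row["position"]]
    print(f"{row['position']:>3} {row['token']!r:>15}  norm={emb.norm().item():.3f}")
```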
- ### Data Splits
-
- This dataset contains a single split with embeddings for all 232 species.
-
- ## Dataset Creation
-
- ### Model Information
-
- **Model**: DeepSeek-V3-0324-UD-Q4_K_XL
- **Parameters**: 671B (4.5-bit quantized GGUF format)
- **Embedding Dimension**: 7168
- **Context**: 2048 tokens
- **Prompt Template**: "Ecophysiology of {species_name}:"
-
- ### Source Data
-
- Species names are based on GBIF (Global Biodiversity Information Facility) taxonomy for plants native to Central Florida.
-
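Since files are keyed by GBIF taxon IDs, it can be handy to resolve an ID back to its taxonomic record. A minimal sketch against GBIF's public species REST API (the endpoint and fields follow GBIF's documented v1 API; the ID below is the example used elsewhere in this card):

```python
import requests

# Resolve a GBIF taxon ID (the same IDs used as file names) via GBIF's public API.
taxon_id = "2650927"
resp = requests.get(f"https://api.gbif.org/v1/species/{taxon_id}", timeout=30)
resp.raise_for_status()
record = resp.json()
print(record.get("scientificName"), "-", record.get("family"))
```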
- ## Usage
-
- ### Loading Embeddings
-
- ```python
  import torch
  import pandas as pd
- from huggingface_hub import hf_hub_download
-
- # Download a specific embedding
- repo_id = "deepearth/central_florida_native_plants"
- species_id = "2650927"  # Example GBIF ID
-
- # Download embedding file
- embedding_path = hf_hub_download(
-     repo_id=repo_id,
-     filename=f"embeddings/{species_id}.pt",
-     repo_type="dataset"
- )
-
- # Load embedding dictionary
- data = torch.load(embedding_path)
-
- # Access embeddings
- mean_embedding = data['mean_embedding']  # Shape: [7168]
- token_embeddings = data['token_embeddings']  # Shape: [num_tokens, 7168]
- species_name = data['species_name']
-
- print(f"Species: {species_name}")
- print(f"Mean embedding shape: {mean_embedding.shape}")
- print(f"Token embeddings shape: {token_embeddings.shape}")
-
- # Download and load token mapping
- token_path = hf_hub_download(
-     repo_id=repo_id,
-     filename=f"tokens/{species_id}.csv",
-     repo_type="dataset"
- )
- tokens = pd.read_csv(token_path)
- ```
-
- ### Batch Download
-
- ```python
- from huggingface_hub import snapshot_download
-
- # Download entire dataset
- local_dir = snapshot_download(
-     repo_id="deepearth/central_florida_native_plants",
-     repo_type="dataset",
-     local_dir="./florida_plants"
- )
- ```
-
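Once the snapshot is local, the per-species files can be assembled into a single matrix for downstream work. A minimal sketch, assuming the `./florida_plants` layout from the snippet above (variable names are illustrative):

```python
import torch
from pathlib import Path

# Assumes snapshot_download placed the data in ./florida_plants as shown above.
embedding_files = sorted(Path("./florida_plants/embeddings").glob("*.pt"))

species, vectors = [], []
for path in embedding_files:
    record = torch.load(path)
    species.append(record["species_name"])
    vectors.append(record["mean_embedding"])

matrix = torch.stack(vectors)  # Shape: [num_species, 7168]
print(matrix.shape, len(species))
```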
- ## Additional Information
-
- ### Dataset Curators
-
- This dataset was created by the [DeepEarth Project](https://github.com/legel/deepearth) to enable machine learning research on biodiversity and ecology.
-
- ### Licensing Information
-
- This dataset is licensed under the Creative Commons Attribution 4.0 International License (CC-BY-4.0).
-
- ### Citation Information
-
- ```bibtex
- @dataset{deepearth_florida_plants_2024,
-   title={Central Florida Native Plants Language Embeddings},
-   author={DeepEarth Project},
-   year={2024},
-   publisher={Hugging Face},
-   howpublished={\url{https://huggingface.co/datasets/deepearth/central_florida_native_plants}}
- }
- ```
-
- ### Contributions
-
- Thanks to [@legel](https://github.com/legel) for creating this dataset.
+ #!/usr/bin/env python3
+ """
+ Dataset viewer for Central Florida Native Plants embeddings
+ """

+ import os
  import torch
  import pandas as pd
+ import numpy as np
+ from pathlib import Path
+
+ def load_species_list():
+     """Load list of species from embeddings directory"""
+     embeddings_dir = Path(__file__).parent / "embeddings"
+     if not embeddings_dir.exists():
+         print("Error: embeddings directory not found. Please run download_dataset.sh first.")
+         return []
+
+     species_ids = []
+     for file in sorted(embeddings_dir.glob("*.pt")):
+         species_id = file.stem
+         species_ids.append(species_id)
+
+     return species_ids
+
+ def load_embedding(species_id):
+     """Load embedding for a specific species"""
+     embedding_path = Path(__file__).parent / "embeddings" / f"{species_id}.pt"
+     if not embedding_path.exists():
+         return None
+     return torch.load(embedding_path)
+
+ def load_tokens(species_id):
+     """Load token mapping for a specific species"""
+     token_path = Path(__file__).parent / "tokens" / f"{species_id}.csv"
+     if not token_path.exists():
+         return None
+     return pd.read_csv(token_path)
+
+ def analyze_dataset():
+     """Analyze the dataset and print summary statistics"""
+     species_ids = load_species_list()
+
+     print(f"Total species: {len(species_ids)}")
+     print("\nFirst 10 species IDs:")
+     for i, species_id in enumerate(species_ids[:10]):
+         print(f" {i+1}. {species_id}")
+
+     if species_ids:
+         # Analyze first species as example
+         example_id = species_ids[0]
+         data = load_embedding(example_id)
+         tokens = load_tokens(example_id)
+
+         print(f"\nExample species: {example_id}")
+         print(f"Species name: {data['species_name']}")
+         print(f"Taxon ID: {data['taxon_id']}")
+         print(f"Number of tokens: {data['num_tokens']}")
+
+         # Mean embedding info
+         mean_emb = data['mean_embedding']
+         print(f"\nMean embedding:")
+         print(f" Shape: {mean_emb.shape}")
+         print(f" Dtype: {mean_emb.dtype}")
+         print(f" Min/Max: {mean_emb.min():.4f} / {mean_emb.max():.4f}")
+         print(f" Mean/Std: {mean_emb.mean():.4f} / {mean_emb.std():.4f}")
+
+         # Show first 10 and last 10 values of mean embedding
+         print(f"\n First 10 values: {mean_emb[:10].numpy()}")
+         print(f" Last 10 values: {mean_emb[-10:].numpy()}")
+
+         # Token embeddings info
+         token_embs = data['token_embeddings']
+         print(f"\nToken embeddings:")
+         print(f" Shape: {token_embs.shape}")
+         print(f" Per-token dimension: {token_embs.shape[1]}")
+
+         # Show embedding values for first token
+         print(f"\n First token embedding (first 10 dims): {token_embs[0, :10].numpy()}")
+         print(f" First token embedding (last 10 dims): {token_embs[0, -10:].numpy()}")
+
+         # Show embedding statistics
+         print(f"\n Token embeddings statistics:")
+         print(f" Min/Max across all: {token_embs.min():.4f} / {token_embs.max():.4f}")
+         print(f" Mean/Std across all: {token_embs.mean():.4f} / {token_embs.std():.4f}")
+
+         if tokens is not None:
+             print(f"\nToken information:")
+             print(f"Number of tokens in CSV: {len(tokens)}")
+             print("\nFirst 5 tokens:")
+             print(tokens.head())
+
+             # Reconstruct text
+             text = ''.join(tokens['token'].tolist())
+             print(f"\nReconstructed text: {text}")
+
+ def compute_similarity_matrix(n_samples=10):
+     """Compute pairwise cosine similarities between species using mean embeddings"""
+     species_ids = load_species_list()[:n_samples]
+
+     embeddings = []
+     species_names = []
+     for species_id in species_ids:
+         data = load_embedding(species_id)
+         if data is not None:
+             embeddings.append(data['mean_embedding'].numpy())
+             species_names.append(data['species_name'])
+
+     if len(embeddings) < 2:
+         print("Not enough embeddings to compute similarities")
+         return
+
+     # Stack embeddings
+     embeddings = np.stack(embeddings)
+
+     # Normalize embeddings
+     norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
+     normalized = embeddings / norms
+
+     # Compute cosine similarity matrix
+     similarity_matrix = normalized @ normalized.T
+
+     print(f"\nCosine similarity matrix ({n_samples}x{n_samples}):")
+     print("Species:", species_names)
+     print("\nSimilarity matrix (first 5x5):")
+     print(similarity_matrix[:5, :5])
+
+     # Find most similar pairs
+     mask = np.triu(np.ones_like(similarity_matrix), k=1).astype(bool)
+     similarities = similarity_matrix[mask]
+     indices = np.argwhere(mask)
+
+     sorted_idx = np.argsort(similarities)[::-1]
+     print(f"\nMost similar pairs:")
+     for i in range(min(5, len(sorted_idx))):
+         idx = sorted_idx[i]
+         i1, i2 = indices[idx]
+         sim = similarities[idx]
+         print(f" {species_names[i1]} - {species_names[i2]}: {sim:.4f}")
+
+ def explore_species(species_id=None):
+     """Explore a specific species' embeddings in detail"""
+     species_ids = load_species_list()
+
+     if species_id is None:
+         # Pick a random species
+         import random
+         species_id = random.choice(species_ids)
+
+     if species_id not in species_ids:
+         print(f"Species ID {species_id} not found in dataset")
+         return
+
+     data = load_embedding(species_id)
+     tokens = load_tokens(species_id)
+
+     print(f"\nDetailed exploration of species: {species_id}")
+     print("=" * 60)
+     print(f"Species name: {data['species_name']}")
+     print(f"Taxon ID: {data['taxon_id']}")
+     print(f"Timestamp: {data.get('timestamp', 'N/A')}")
+
+     # Mean embedding analysis
+     mean_emb = data['mean_embedding']
+     print(f"\nMean Embedding Analysis:")
+     print(f" Dimension: {mean_emb.shape[0]}")
+     print(f" Norm (L2): {torch.norm(mean_emb).item():.4f}")
+     print(f" Top 5 positive values: {torch.topk(mean_emb, 5).values.numpy()}")
+     print(f" Top 5 negative values: {torch.topk(-mean_emb, 5).values.numpy() * -1}")
+
+     # Embedding statistics from stored data
+     if 'embedding_stats' in data:
+         stats = data['embedding_stats']
+         print(f"\nStored embedding statistics:")
+         for key, value in stats.items():
+             if isinstance(value, (int, float)):
+                 print(f" {key}: {value:.4f}" if isinstance(value, float) else f" {key}: {value}")
+
+     # Token-level analysis
+     token_embs = data['token_embeddings']
+     print(f"\nToken-level Analysis:")
+     print(f" Number of tokens: {token_embs.shape[0]}")
+     print(f" Embedding dimension per token: {token_embs.shape[1]}")
+
+     if tokens is not None and len(tokens) > 0:
+         print(f"\nToken Details:")
+         for idx, row in tokens.iterrows():
+             if idx < 5:  # Show first 5 tokens
+                 token_emb = token_embs[idx]
+                 print(f" Token {idx}: '{row['token']}' (ID: {row['token_id']})")
+                 print(f" Norm: {torch.norm(token_emb).item():.4f}")
+                 print(f" Mean: {token_emb.mean().item():.4f}, Std: {token_emb.std().item():.4f}")
+                 print(f" First 5 dims: {token_emb[:5].numpy()}")
+
+     # Variance analysis across dimensions
+     print(f"\nDimensional Variance Analysis:")
+     dim_vars = mean_emb.var()
+     print(f" Overall variance: {dim_vars:.6f}")
+
+     # Find most variable dimensions
+     token_vars = token_embs.var(dim=0)  # Variance across tokens for each dimension
+     top_var_dims = torch.topk(token_vars, 10).indices
+     print(f" Top 10 most variable dimensions across tokens: {top_var_dims.numpy()}")
+
+     return data, tokens
+
+ if __name__ == "__main__":
+     print("Central Florida Native Plants Dataset Viewer")
+     print("=" * 50)
+
+     analyze_dataset()
+     print("\n" + "=" * 50)
+     compute_similarity_matrix(n_samples=10)
+     print("\n" + "=" * 50)
+     explore_species()  # Explore a random species in detail
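A closing usage note on the uploaded script: it resolves `embeddings/` and `tokens/` relative to its own location, so it expects to live alongside those directories. A minimal sketch of one way to run it, assuming a dataset snapshot places `dataset_viewer.py`, `embeddings/`, and `tokens/` together in `./florida_plants` (that layout is an assumption, not verified here):

```python
import subprocess

# Assumption: the dataset snapshot (dataset_viewer.py, embeddings/, tokens/)
# was downloaded into ./florida_plants as shown earlier in this card.
subprocess.run(["python", "dataset_viewer.py"], cwd="./florida_plants", check=True)
```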