Upload README.md with huggingface_hub
README.md CHANGED
@@ -41,7 +41,7 @@ Each species is represented by:
 ### Embedding File Structure
 
 Each `.pt` file contains a dictionary with:
-- `mean_embedding`: Tensor of shape `[7168]` - mean-pooled embedding across all tokens
+- `mean_embedding`: Tensor of shape `[7168]` - mean-pooled embedding across all tokens (including prompt)
 - `token_embeddings`: Tensor of shape `[num_tokens, 7168]` - individual token embeddings
 - `species_name`: String - the species name
 - `taxon_id`: String - GBIF taxon ID
@@ -49,6 +49,22 @@ Each `.pt` file contains a dictionary with:
 - `embedding_stats`: Dictionary with embedding statistics
 - `timestamp`: String - when the embedding was created
 
+### Dataset Viewer Structure
+
+The Parquet files in the dataset viewer contain:
+- `taxon_id`: GBIF taxonomic identifier
+- `species_name`: Scientific name of the plant species
+- `timestamp`: When the embedding was created
+- `token_position`: Position of the token in the sequence
+- `token_id`: Token ID in the model vocabulary
+- `token_str`: String representation of the token
+- `is_species_token`: Whether this token is part of the species name
+- `token_embedding`: 7168-dimensional embedding vector for this specific token
+- `species_mean_embedding`: 7168-dimensional mean embedding of species name tokens only
+- `all_tokens_mean_embedding`: 7168-dimensional mean embedding across all tokens (including prompt)
+- `num_tokens`: Total number of tokens for this species
+- `num_species_tokens`: Number of tokens that are part of the species name
+
 ### Token Mapping Structure
 
 Token mapping CSV files contain:
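As a minimal sketch, the Parquet rows described above can be loaded with the `datasets` library. The repository ID below is a placeholder and the single split is assumed to be exposed as `train`; adjust both to match the actual dataset.

```python
from datasets import load_dataset

# Minimal sketch: replace the placeholder repo ID with this dataset's actual ID.
# The dataset's single split is assumed to be exposed as "train".
ds = load_dataset("your-username/plant-species-embeddings", split="train")

# Each row corresponds to one token of one species, with the columns listed above.
row = ds[0]
print(row["species_name"], row["taxon_id"], row["token_str"], row["is_species_token"])
print(len(row["token_embedding"]))          # 7168
print(len(row["species_mean_embedding"]))   # 7168
print(row["num_tokens"], row["num_species_tokens"])
```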
@@ -60,6 +76,16 @@ Token mapping CSV files contain:
 
 This dataset contains a single split with embeddings for all 232 species.
 
+## Important Note on Embeddings
+
+This dataset provides two types of mean embeddings:
+
+1. **`species_mean_embedding`** (in the dataset viewer): The mean embedding calculated from ONLY the tokens that represent the species name itself. This provides a more focused representation of the species.
+
+2. **`all_tokens_mean_embedding`** or `mean_embedding` (in the `.pt` files): The mean embedding calculated from ALL tokens in the prompt, including "Ecophysiology of", the species name, and the ":" token. This is the original embedding as extracted from the model.
+
+For most use cases, `species_mean_embedding` is recommended, as it captures the semantic representation of the species name without the influence of the prompt template.
+
 ## Dataset Creation
 
 ### Model Information
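To illustrate the distinction between the two mean embeddings, the species-only mean can be recomputed from the per-token rows and checked against the stored `species_mean_embedding`. The sketch below assumes `ds` was loaded as in the previous snippet; the species is taken from the first row purely as an example.

```python
import torch

# Sketch: recompute the species-only mean embedding for one species from its token rows.
example_species = ds[0]["species_name"]
rows = [r for r in ds if r["species_name"] == example_species]

# Stack only the tokens flagged as part of the species name.
species_token_vecs = torch.tensor(
    [r["token_embedding"] for r in rows if r["is_species_token"]]
)  # shape: [num_species_tokens, 7168]
recomputed_mean = species_token_vecs.mean(dim=0)

stored_mean = torch.tensor(rows[0]["species_mean_embedding"])
# If the columns are as documented above, the two should agree up to floating-point precision.
print(torch.allclose(recomputed_mean, stored_mean, atol=1e-4))
```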
@@ -98,7 +124,7 @@ embedding_path = hf_hub_download(
 data = torch.load(embedding_path)
 
 # Access embeddings
-mean_embedding = data['mean_embedding']  # Shape: [7168]
+mean_embedding = data['mean_embedding']  # Shape: [7168] - mean of all tokens
 token_embeddings = data['token_embeddings']  # Shape: [num_tokens, 7168]
 species_name = data['species_name']
 
@@ -106,6 +132,10 @@ print(f"Species: {species_name}")
 print(f"Mean embedding shape: {mean_embedding.shape}")
 print(f"Token embeddings shape: {token_embeddings.shape}")
 
+# For a species-only mean embedding, use the dataset viewer or compute it from the species tokens
+# The dataset viewer provides 'species_mean_embedding', which is the mean of only
+# the tokens that are part of the species name (excluding prompt tokens)
+
 # Download and load token mapping
 token_path = hf_hub_download(
     repo_id=repo_id,
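As a simple downstream usage sketch, two species can be compared via cosine similarity of their stored mean embeddings. The repository ID and `.pt` file names below are placeholders, not actual paths from this dataset.

```python
import torch
import torch.nn.functional as F
from huggingface_hub import hf_hub_download

repo_id = "your-username/plant-species-embeddings"  # placeholder repo ID

def load_mean_embedding(filename: str) -> torch.Tensor:
    # Download one embedding file and return its stored mean embedding.
    path = hf_hub_download(repo_id=repo_id, filename=filename, repo_type="dataset")
    data = torch.load(path)
    return data["mean_embedding"]  # shape [7168], mean over all tokens (including prompt)

emb_a = load_mean_embedding("embeddings/species_a.pt")  # placeholder file name
emb_b = load_mean_embedding("embeddings/species_b.pt")  # placeholder file name

similarity = F.cosine_similarity(emb_a.unsqueeze(0), emb_b.unsqueeze(0)).item()
print(f"Cosine similarity: {similarity:.4f}")
```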