---
license: mit
task_categories:
  - feature-extraction
language:
  - en
tags:
  - biology
  - ecology
  - plants
  - embeddings
  - florida
  - biodiversity
pretty_name: Central Florida Native Plants Language Embeddings
size_categories:
  - n<1K
---

# Central Florida Native Plants Language Embeddings

This dataset contains language embeddings for 232 native plant species from Central Florida, extracted using the DeepSeek-V3 language model.

## Dataset Description

- **Curated by:** DeepEarth Project
- **Language(s):** English
- **License:** MIT

## Dataset Summary

This dataset provides pre-computed language embeddings for Central Florida plant species. Each species has been encoded using the prompt `"Ecophysiology of {species_name}:"` to capture semantic information about the plant's ecological characteristics.
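
The prompt used for each species can be reconstructed as a simple format string (a minimal sketch; the species name below is just an example, and the exact tokenizer invocation is not part of this dataset):

```python
# Reconstruct the prompt used to generate each species embedding.
species_name = "Serenoa repens"  # example species name
prompt = f"Ecophysiology of {species_name}:"
print(prompt)  # Ecophysiology of Serenoa repens:
```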

## Dataset Structure

### Data Instances

Each species is represented by:

- A PyTorch file (`.pt`) containing a dictionary with embeddings and metadata
- A CSV file containing the token mappings

### Embedding File Structure

Each `.pt` file contains a dictionary with:

- `mean_embedding`: Tensor of shape `[7168]` - mean-pooled embedding across all tokens
- `token_embeddings`: Tensor of shape `[num_tokens, 7168]` - individual token embeddings
- `species_name`: String - the species name
- `taxon_id`: String - GBIF taxon ID
- `num_tokens`: Integer - number of tokens (typically 18-20)
- `embedding_stats`: Dictionary with embedding statistics
- `timestamp`: String - when the embedding was created
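
As a quick sanity check, a loaded file can be validated against this schema. The sketch below builds a stand-in dictionary with the same keys and shapes rather than downloading a real file; the `taxon_id` and `embedding_stats` values are placeholders (the dataset does not document the exact statistics keys):

```python
import torch

# Stand-in for a loaded .pt file; real files follow the same schema.
num_tokens = 19  # real files typically have 18-20 tokens
data = {
    "mean_embedding": torch.zeros(7168),
    "token_embeddings": torch.zeros(num_tokens, 7168),
    "species_name": "Serenoa repens",   # example species name
    "taxon_id": "0000000",              # placeholder GBIF taxon ID
    "num_tokens": num_tokens,
    "embedding_stats": {"mean": 0.0, "std": 0.0},  # assumed keys
    "timestamp": "2025-01-01T00:00:00",
}

# Basic consistency checks against the documented structure.
assert data["mean_embedding"].shape == (7168,)
assert data["token_embeddings"].shape == (data["num_tokens"], 7168)
```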

### Token Mapping Structure

Token mapping CSV files contain:

- `position`: Token position in the sequence
- `token_id`: Token ID in the model vocabulary
- `token`: Token string representation
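
Because rows are ordered by `position`, the CSV lines up one-to-one with the rows of `token_embeddings`. A minimal sketch of that alignment, using synthetic stand-ins (the token strings and IDs below are illustrative, not real vocabulary entries):

```python
import pandas as pd
import torch

# Synthetic stand-ins: 3 tokens and their embeddings (real files have ~18-20).
tokens = pd.DataFrame({
    "position": [0, 1, 2],
    "token_id": [101, 2543, 27],          # illustrative IDs
    "token": ["Eco", "physiology", " of"],  # illustrative strings
})
token_embeddings = torch.zeros(len(tokens), 7168)

# Row i of the CSV describes row i of token_embeddings.
for _, row in tokens.iterrows():
    vec = token_embeddings[row["position"]]
    assert vec.shape == (7168,)
```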

### Data Splits

This dataset contains a single split with embeddings for all 232 species.

## Dataset Creation

### Model Information

- **Model:** DeepSeek-V3-0324-UD-Q4_K_XL
- **Parameters:** 671B (4.5-bit quantized GGUF format)
- **Embedding Dimension:** 7168
- **Context Length:** 2048 tokens
- **Prompt Template:** `"Ecophysiology of {species_name}:"`

### Source Data

Species names are based on GBIF (Global Biodiversity Information Facility) taxonomy for plants native to Central Florida.

## Usage

### Loading Embeddings

```python
import torch
import pandas as pd
from huggingface_hub import hf_hub_download

# Download a specific embedding
repo_id = "deepearth/central_florida_native_plants"
species_id = "2650927"  # Example GBIF ID

# Download the embedding file
embedding_path = hf_hub_download(
    repo_id=repo_id,
    filename=f"embeddings/{species_id}.pt",
    repo_type="dataset",
)

# Load the embedding dictionary (map to CPU so no GPU is required)
data = torch.load(embedding_path, map_location="cpu")

# Access embeddings
mean_embedding = data['mean_embedding']  # Shape: [7168]
token_embeddings = data['token_embeddings']  # Shape: [num_tokens, 7168]
species_name = data['species_name']

print(f"Species: {species_name}")
print(f"Mean embedding shape: {mean_embedding.shape}")
print(f"Token embeddings shape: {token_embeddings.shape}")

# Download and load the token mapping
token_path = hf_hub_download(
    repo_id=repo_id,
    filename=f"tokens/{species_id}.csv",
    repo_type="dataset",
)
tokens = pd.read_csv(token_path)
```

### Batch Download

```python
from huggingface_hub import snapshot_download

# Download the entire dataset
local_dir = snapshot_download(
    repo_id="deepearth/central_florida_native_plants",
    repo_type="dataset",
    local_dir="./florida_plants",
)
```
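
Once loaded, mean embeddings can be compared directly, for example with cosine similarity to gauge how ecologically similar the model considers two species. This sketch uses random tensors in place of real species embeddings so it runs without downloading anything:

```python
import torch
import torch.nn.functional as F

# Random stand-ins for two species' mean embeddings (shape [7168]).
torch.manual_seed(0)
emb_a = torch.randn(7168)
emb_b = torch.randn(7168)

# Cosine similarity between the two mean-pooled embeddings.
sim = F.cosine_similarity(emb_a, emb_b, dim=0).item()
print(f"cosine similarity: {sim:.4f}")
```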

## Additional Information

### Dataset Curators

This dataset was created by the DeepEarth Project to enable machine learning research on biodiversity and ecology.

### Licensing Information

This dataset is licensed under the MIT License.

### Citation Information

```bibtex
@dataset{deepearth_florida_plants_2025,
  title={Central Florida Native Plants Language Embeddings},
  author={DeepEarth Project},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/datasets/deepearth/central_florida_native_plants}}
}
```

### Contributions

Thanks to @legel for creating this dataset.