
Atlas Export

Generate and deploy interactive embedding visualizations to HuggingFace Spaces with a single command using Apple's Embedding Atlas library!

Example Visualization

Scripts

This repo contains two scripts:

| Script | Description | Best for |
|--------|-------------|----------|
| `atlas-export.py` | All-in-one: embeds data into the Space | Small/medium datasets (<10GB) |
| `atlas-export-remote.py` | Splits data into a separate HF dataset repo | Large datasets (10GB+), images |

Why remote?

`atlas-export-remote.py` stores the parquet data in a separate HF dataset repository and deploys only a lightweight viewer to the Space. The viewer loads data on demand via HTTP range requests, so the Space itself never hits storage limits. This relies on the native `--export-metadata` option in embedding-atlas >= 0.18.0 for clean remote data loading.
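The range-request idea can be sketched in a few lines of Python. Instead of downloading the whole parquet file, the viewer asks the server for specific byte spans; the local stand-in below simulates a server honoring an inclusive `Range: bytes=start-end` header (the actual loader lives inside embedding-atlas and is not shown here).

```python
# Minimal sketch of on-demand loading via byte ranges.
# A real viewer sends "Range: bytes=start-end" over HTTP; here a local
# bytes object stands in for the remote parquet file.

def fetch_range(blob: bytes, start: int, end: int) -> bytes:
    """Simulate a server answering an inclusive byte-range request."""
    return blob[start : end + 1]

remote_file = bytes(1024)  # stand-in for a parquet file hosted on the Hub
# Grab only the tail of the file (parquet keeps its footer metadata there).
tail = fetch_range(remote_file, len(remote_file) - 8, len(remote_file) - 1)
print(len(tail))  # only 8 bytes transferred, not the whole file
```

This is why the Space can stay tiny: it ships the viewer, and the bytes it needs come from the dataset repo as they are requested.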

Quick Start

Basic (data embedded in Space)

```bash
uv run atlas-export.py stanfordnlp/imdb --space-name my-imdb-viz
```

Remote data (recommended for larger datasets)

```bash
uv run atlas-export-remote.py stanfordnlp/imdb \
    --space-name my-imdb-viz \
    --data-repo my-imdb-data
```

Examples

Text Datasets

```bash
# Custom embedding model with sampling
uv run atlas-export-remote.py wikipedia \
    --space-name wiki-viz \
    --data-repo wiki-atlas-data \
    --model nomic-ai/nomic-embed-text-v1.5 \
    --text-column text \
    --sample 50000
```

Image Datasets

```bash
# Visualize image datasets with CLIP
uv run atlas-export-remote.py food101 \
    --space-name food-atlas \
    --data-repo food-atlas-data \
    --image-column image \
    --text-column label \
    --sample 5000
```

Pre-computed Embeddings

```bash
# If you already have embeddings in your dataset
uv run atlas-export.py my-dataset-with-embeddings \
    --space-name my-viz \
    --no-compute-embeddings \
    --x-column umap_x \
    --y-column umap_y
```
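For `--no-compute-embeddings` to work, each row needs numeric coordinate columns matching `--x-column`/`--y-column`. A rough sketch of the expected row shape (the column names `umap_x`/`umap_y` are simply the ones passed above; the values here are made up):

```python
# Rough sketch: rows carrying pre-computed 2D coordinates, as expected by
# --no-compute-embeddings with --x-column umap_x --y-column umap_y.
rows = [
    {"text": "a gripping thriller", "umap_x": 0.12, "umap_y": -1.40},
    {"text": "slow but beautiful",  "umap_x": -2.31, "umap_y": 0.87},
]

# Every row must carry both coordinates as plain floats.
assert all(
    isinstance(r["umap_x"], float) and isinstance(r["umap_y"], float)
    for r in rows
)
```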

From an Existing Export

```bash
# Use an atlas export ZIP you already have
uv run atlas-export-remote.py \
    --from-export atlas_export.zip \
    --space-name my-viz \
    --data-repo my-data
```

GPU Acceleration (HF Jobs)

```bash
# Run on HF Jobs with a GPU (recommended for large datasets)
hf jobs uv run --flavor t4-small -s HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/build-atlas/raw/main/atlas-export-remote.py \
    stanfordnlp/imdb \
    --space-name imdb-viz \
    --data-repo imdb-atlas-data \
    --sample 10000
```

```bash
# With a bigger GPU for faster processing
hf jobs uv run --flavor a10g-large -s HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/build-atlas/raw/main/atlas-export-remote.py \
    your-dataset \
    --space-name your-atlas \
    --data-repo your-atlas-data \
    --text-column output \
    --sample 50000
```

Available GPU flavors: `t4-small`, `t4-medium`, `l4x1`, `a10g-small`, `a10g-large`.
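Which flavor to pick depends mostly on how many rows you plan to embed. A hypothetical rule of thumb (not part of either script, just a starting point you can tune):

```python
# Hypothetical heuristic, not part of either script: pick an HF Jobs GPU
# flavor based on how many rows you plan to embed.
def pick_flavor(n_rows: int) -> str:
    if n_rows <= 20_000:
        return "t4-small"    # small samples embed quickly on a T4
    if n_rows <= 100_000:
        return "a10g-small"
    return "a10g-large"      # large runs benefit from the bigger GPU

print(pick_flavor(10_000))  # → t4-small
```

Embedding model size matters too: larger models may need the bigger flavors even at modest row counts.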

Key Options

atlas-export.py

| Option | Description | Default |
|--------|-------------|---------|
| `dataset_id` | HuggingFace dataset to visualize | Required |
| `--space-name` | Name for your Space | Required |
| `--model` | Embedding model to use | Auto-selected |
| `--text-column` | Column containing text | `"text"` |
| `--image-column` | Column containing images | None |
| `--sample` | Number of samples to visualize | All |
| `--batch-size` | Batch size for embedding generation | 32 (text), 16 (images) |
| `--split` | Dataset split to use | `"train"` |
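`--batch-size` controls how many rows are embedded per forward pass; larger batches are faster on a GPU but use more memory. The chunking itself is ordinary batching, sketched here (the helper name is illustrative, not the script's internals):

```python
def batches(items, size):
    """Yield successive chunks of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i : i + size]

texts = [f"doc {i}" for i in range(100)]
chunks = list(batches(texts, 32))
print(len(chunks))  # 100 rows at batch size 32 -> 4 batches (32+32+32+4)
```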

atlas-export-remote.py

| Option | Description | Default |
|--------|-------------|---------|
| `dataset_id` | HuggingFace dataset to visualize | Required |
| `--space-name` | Name for your Space | Required |
| `--data-repo` | Name for the HF dataset repo (stores parquet) | Required |
| `--model` | Embedding model to use | Auto-selected |
| `--text-column` | Column containing text | `"text"` |
| `--image-column` | Column containing images | None |
| `--sample` | Number of samples to visualize | All |
| `--split` | Dataset split to use | `"train"` |
| `--from-export` | Use an existing atlas export ZIP | None |
| `--organization` | HF org for repos (default: your username) | None |
| `--private` | Make both Space and dataset private | False |
| `--local-only` | Prepare locally without deploying | False |

Run either script without arguments to see all options.

How It Works

atlas-export.py

  1. Loads dataset from HuggingFace Hub
  2. Generates embeddings (or uses pre-computed)
  3. Creates static web app with embedded data
  4. Deploys to HF Space

atlas-export-remote.py

  1. Loads dataset and generates embeddings
  2. Exports viewer with --export-metadata pointing to the remote parquet URL
  3. Uploads parquet to a HF dataset repo
  4. Deploys the lightweight viewer (~100MB) to a HF Space
  5. The viewer fetches data on-demand via HTTP range requests
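The remote parquet URL in step 2 follows the Hub's standard `resolve` pattern, which serves raw files and honors range requests. A sketch (the filename `data.parquet` is an assumption for illustration):

```python
def parquet_url(repo_id: str, filename: str = "data.parquet") -> str:
    # Hub "resolve" URLs serve the raw file and support HTTP range requests.
    return f"https://huggingface.co/datasets/{repo_id}/resolve/main/{filename}"

print(parquet_url("your-name/imdb-atlas-data"))
```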

Credits

Built on Embedding Atlas by Apple (>= 0.18.0 for remote data support). See the documentation for more details.


Part of the UV Scripts collection 🚀
