
Seamless Interaction Dataset


A large-scale multimodal dataset of 4,000+ hours of human interactions for AI research

🖼️ Blog 🌐 Website 🎮 Demo 📦 GitHub 📄 Paper

Human communication involves a complex interplay of verbal and nonverbal signals, essential for conveying meaning and achieving interpersonal goals.

The Seamless Interaction Dataset is a large-scale collection of over 4,000 hours of face-to-face interaction footage from more than 4,000 participants in diverse contexts. This dataset enables the development of AI technologies that understand human interactions and communication, unlocking breakthroughs in:

  • 🤖 Virtual agents and embodied AI
  • 🎭 Natural human-computer interaction
  • 📡 Advanced telepresence experiences
  • 📊 Multimodal content analysis tools
  • 🎬 Animation and synthetic content generation

🚀 Quick Start

git clone https://github.com/facebookresearch/seamless-interaction
cd seamless-interaction
pip install -e .
streamlit run src/seamless_interaction/app/Welcome.py

# if you use uv
uv sync
uv run streamlit run src/seamless_interaction/app/Welcome.py

Explore the dataset with our interactive browser:

Features:

  • 🔍 Hierarchical Navigation: Browse by Label → Split → Batch → Interaction
  • 🎲 Random Sampling: Discover interactions with one-click random selection
  • 📥 Download Interface: Download specific batches with size estimation and progress tracking
  • 🎬 Video Viewer: Side-by-side participant videos with synchronized playback
  • 📊 Data Analysis: Overview statistics and distribution plots
  • 📁 File Management: Organize and preview audio, JSON, and NPZ files with expandable dropdowns

Download Options

We provide several download methods to support different research scales and requirements:

| Scale | Size | Method | Use Case | Script | Sampling |
|---|---|---|---|---|---|
| 🔍 Single Example | ~100MB | S3 | Quick exploration, understanding data structure | download_s3.py | Auto-sample from preferred vendors |
| 👥 Interaction Pair | ~200MB | S3 | Study conversational dynamics between participants | download_s3.py | Auto-detect conversation pairs |
| 📂 Sample Set | ~1GB | S3/HF | Initial prototyping, algorithm development | download_s3.py, download_hf.py | File selection or archive-based |
| 🎯 Session Groups | ~400MB | S3 | Deep conversational context, session dynamics | download_s3.py | Auto-sample rich sessions |
| 📦 Single Batch | ~50GB | HF | Substantial local development, full exploration | download_hf.py | WebDataset tarball download |
| 🗂️ Multiple Batches | ~150GB+ | HF | Training datasets, large-scale analysis | download_hf.py | WebDataset tarball download |
| 🎯 Different Splits | Variable | HF | Cross-validation (train/dev/test, improvised/naturalistic) | download_hf.py | WebDataset tarball download |
| 🌍 Whole Dataset | ~27TB | HF | Complete research dataset, production systems | download_hf.py | WebDataset tarball download |

Basic Data Loading (HF + WebDataset)

from datasets import load_dataset

# configure
label = "improvised"
split = "dev"
batch_idx = 0
archive_list = [0, 1]

base_url = (
    f"https://huggingface.co/datasets/facebook/"
    f"seamless-interaction/resolve/main/{label}/{split}/"
    "{batch_idx:04d}/{archive_idx:04d}.tar"
)
urls = [base_url.format(batch_idx=batch_idx, archive_idx=archive_idx) for archive_idx in archive_list]
dataset = load_dataset(
    "webdataset", data_files={split: urls}, split=split, streaming=True
)

for item in dataset:
    break

isinstance(item["mp4"], bytes)
# True
item["npz"].keys()
# dict_keys(['boxes_and_keypoints:box', 'boxes_and_keypoints:is_valid_box', 'boxes_and_keypoints:keypoints', 'movement:EmotionArousalToken', 'movement:EmotionValenceToken', 'movement:FAUToken', 'movement:FAUValue', 'movement:alignment_head_rotation', 'movement:alignment_translation', 'movement:emotion_arousal', 'movement:emotion_scores', 'movement:emotion_valence', 'movement:expression', 'movement:frame_latent', 'movement:gaze_encodings', 'movement:head_encodings', 'movement:hypernet_features', 'movement:is_valid', 'smplh:body_pose', 'smplh:global_orient', 'smplh:is_valid', 'smplh:left_hand_pose', 'smplh:right_hand_pose', 'smplh:translation'])
item["json"].keys()
# dict_keys(['id', 'metadata:transcript', 'metadata:vad'])
item["wav"].keys()
# dict_keys(['path', 'array', 'sampling_rate'])
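
Once an item is streamed, the sketch below shows one way to unpack it, based on the keys printed above. The output directory and the use of item['json']['id'] as a filename stem are assumptions for illustration; the exact schema of the transcript/VAD entries is described in the technical report.

from pathlib import Path

import numpy as np

# `item` comes from the streaming loop above.
out_dir = Path("sample_item")
out_dir.mkdir(exist_ok=True)

# Video arrives as raw MP4 bytes; write it to disk to play or decode later.
(out_dir / f"{item['json']['id']}.mp4").write_bytes(item["mp4"])

# Audio is already decoded into an array plus its sampling rate.
audio = np.asarray(item["wav"]["array"])
sampling_rate = item["wav"]["sampling_rate"]
print(f"audio: {audio.shape} samples @ {sampling_rate} Hz")

# Numerical features are exposed as arrays keyed by "<group>:<feature>".
body_pose = item["npz"]["smplh:body_pose"]
keypoints = item["npz"]["boxes_and_keypoints:keypoints"]
print("smplh body_pose:", body_pose.shape, "| keypoints:", keypoints.shape)

# Transcript and voice activity detection live in the JSON payload.
transcript = item["json"]["metadata:transcript"]
vad = item["json"]["metadata:vad"]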

📦 Deep Dive into the Dataset

Dataset Structure

The Seamless Interaction Dataset is organized into two main categories/labels:

  • Improvised: Interactions based on predefined scenarios and guided prompts, featuring at least one professional actor.
  • Naturalistic: Prompted conversations between everyday, non-actor participants.
seamless_interaction
├── improvised                # Interactions with guided prompts
│   ├── dev
│   │   ├── 1P-IS/            # First-party internal state annotations
│   │   │   └── V<vendor>_S<session>_I<interaction>_P<participant>.json
│   │   ├── 1P-R/             # First-party internal state rationale annotations
│   │   │   └── V<vendor>_S<session>_I<interaction>_P<participant>.json
│   │   ├── 3P-IS/            # Third-party internal state annotations
│   │   │   └── V<vendor>_S<session>_I<interaction>_P<participant>.json
│   │   ├── 3P-R/             # Third-party internal state rationale annotations
│   │   │   └── V<vendor>_S<session>_I<interaction>_P<participant>.json
│   │   ├── 3P-V/             # Third-party visual annotations
│   │   │   └── V<vendor>_S<session>_I<interaction>_P<participant>.json
│   │   ├── audio/            # Denoised audio (speaker bleed reduced)
│   │   │   └── V<vendor>_S<session>_I<interaction>_P<participant>.wav
│   │   ├── boxes_and_keypoints/
│   │   │   ├── box/          # Bounding boxes for each participant
│   │   │   ├── is_valid_box/ # Whether bounding boxes are valid
│   │   │   └── keypoints/    # Detected facial/body keypoints
│   │   ├── movement/         # Quantified Imitator movement features
│   │   │   ├── emotion_arousal/           # Arousal measures
│   │   │   ├── emotion_valence/           # Valence measures
│   │   │   ├── emotion_scores/            # Emotion detection scores
│   │   │   ├── expression/                # Facial expression parameters
│   │   │   ├── FAUToken/                  # Facial Action Unit tokens
│   │   │   ├── FAUValue/                  # Facial Action Unit values
│   │   │   ├── gaze_encodings/            # Eye gaze direction encodings
│   │   │   ├── head_encodings/            # Head position/rotation encodings
│   │   │   ├── frame_latent/              # Per-frame latent representations
│   │   │   └── is_valid/                  # Validity flags for extracted features
│   │   ├── smplh/            # SMPL-H body model parameters
│   │   │   ├── body_pose/       # Body pose parameters
│   │   │   ├── global_orient/   # Global orientation parameters
│   │   │   ├── is_valid/        # Valid frame indicators
│   │   │   ├── left_hand_pose/  # Left hand pose parameters
│   │   │   ├── right_hand_pose/ # Right hand pose parameters
│   │   │   └── translation/     # Global translation parameters
│   │   ├── transcript/       # Time-aligned speech transcription
│   │   │   └── V<vendor>_S<session>_I<interaction>_P<participant>.jsonl
│   │   ├── vad/              # Voice activity detection
│   │   │   └── V<vendor>_S<session>_I<interaction>_P<participant>.jsonl
│   │   └── video/            # Raw HD video recordings
│   │       └── V<vendor>_S<session>_I<interaction>_P<participant>.mp4
│   ├── test/                 # Test split with similar structure
│   └── train/                # Training split with similar structure
└── naturalistic/             # Spontaneous conversations
    ├── dev/                  # Same structure as improvised/dev
    ├── test/                 # Same structure as improvised/test
    └── train/                # Same structure as improvised/train

Each file is named according to a consistent convention (a short parsing sketch follows this list):

  • V<vendor_id>: Collection site/vendor identifier
  • S<session_id>: Unique session identifier
  • I<interaction_id>: Specific interaction within a session
  • P<participant_id>: Individual participant identifier
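
A small sketch for pulling these components out of a filename with a regular expression; the example filename is made up, purely for illustration:

import re
from pathlib import Path

# Matches V<vendor>_S<session>_I<interaction>_P<participant>, e.g. in .wav/.mp4/.json names.
FILENAME_RE = re.compile(
    r"V(?P<vendor>[^_]+)_S(?P<session>[^_]+)_I(?P<interaction>[^_]+)_P(?P<participant>[^_]+)$"
)

def parse_interaction_filename(path: str) -> dict:
    """Return the vendor/session/interaction/participant IDs encoded in a filename."""
    match = FILENAME_RE.match(Path(path).stem)
    if match is None:
        raise ValueError(f"Unexpected filename: {path}")
    return match.groupdict()

# Hypothetical example filename:
print(parse_interaction_filename("V01_S0042_I0003_P0101.wav"))
# {'vendor': '01', 'session': '0042', 'interaction': '0003', 'participant': '0101'}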

Available Modalities and Features

Each interaction in the dataset includes:

| Modality | Description | File Format | Sample Rate |
|---|---|---|---|
| 🎥 Video | High-definition face-to-face footage | MP4 (H.264) | 30/29.97 FPS, 1080p |
| 🎙️ Audio | Denoised audio with separate channels | WAV | 48 kHz, 16-bit |
| 📝 Transcript | Time-aligned speech transcription | JSONL | - |
| 🏃 SMPL-H | 3D body model parameters | NPY | 30 Hz |
| 🧠 Imitator Movement Features | Comprehensive quantified imitator movement data | NPY | 30 Hz |
| 📊 Annotations | Human-annotated behavioral data | JSON | - |
| 🔊 VAD | Voice activity detection | JSONL | 100 Hz |
| 📦 Keypoints | Face and body keypoints | NPY | 30 Hz |

Annotation Types

The dataset includes several types of human annotations for rich behavioral analysis:

| Annotation | Hours | Total Annotations | Mean # Tokens |
|---|---|---|---|
| 1P-IS (1st-party internal state annotations) | 1.1 | 751 | 5.8 |
| 1P-R (1st-party internal state rationale annotations) | 1.1 | 751 | 10.2 |
| 3P-IS (3rd-party internal state annotations) | 4.7 | 5132 | 5.2 |
| 3P-R (3rd-party internal state rationale annotations) | 4.7 | 5132 | 11.3 |
| 3P-V (3rd-party visual annotations) | 4.7 | 5132 | 14.6 |

Please refer to the technical report for a more detailed overview of annotations.

Movement/Imitator Feature Types

The movement directory contains rich behavioral features (output of the Imitator model):

| Feature | Description |
|---|---|
| emotion_arousal | Arousal intensity measurements |
| emotion_valence | Valence (positive/negative) measurements |
| emotion_scores | Categorical emotion detection scores |
| expression | Parametric facial expression encodings |
| FAUToken / FAUValue | Facial Action Unit tokens and intensity values |
| gaze_encodings | Neural encodings of gaze direction |
| head_encodings | Neural encodings of head position and rotation |
| frame_latent | Per-frame latent representations |
| alignment_head_rotation | Head rotation data for temporal alignment |
| alignment_translation | Translation parameters for temporal alignment |
| EmotionArousalToken / EmotionValenceToken | Discretized emotion tokens |
| hypernet_features | Features from hypernetwork processing |

Dataset Versions

The dataset is organized in self-contained batches for flexible exploration:

| Split | Batches | Size per Batch | Total Size | Description |
|---|---|---|---|---|
| dev | 5 | ~50GB | ~500GB | Development/validation set |
| test | 5 | ~50GB | ~500GB | Hold-out test set |
| train | 200+ | ~50GB | ~20TB+ | Full training data |
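
For the multi-batch scales listed above, the single-archive URL pattern from the basic loading example extends naturally. A sketch, where the batch indices and the number of archives per batch are illustrative placeholders:

from datasets import load_dataset

label, split = "improvised", "train"
batch_indices = [0, 1, 2]   # which batches to stream
archives_per_batch = 2      # illustrative; the actual archive count varies per batch

url_template = (
    "https://huggingface.co/datasets/facebook/seamless-interaction/"
    "resolve/main/{label}/{split}/{batch_idx:04d}/{archive_idx:04d}.tar"
)
urls = [
    url_template.format(label=label, split=split, batch_idx=b, archive_idx=a)
    for b in batch_indices
    for a in range(archives_per_batch)
]

# Same call as in the basic loading example, now spanning several batches.
dataset = load_dataset("webdataset", data_files={split: urls}, split=split, streaming=True)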

File Format Specifications

Our data is stored in the following formats for optimal usability:

| Format | Description | Usage |
|---|---|---|
| NPZ | NumPy array files | Efficient storage of numerical feature vectors, keypoints, and parameters |
| JSONL | JSON Lines | Time-aligned annotations with one event per line (e.g., transcripts, VAD) |
| JSON | JavaScript Object Notation | Structured metadata and annotations with timestamps |
| MP4 | MPEG-4 Part 14 | High-quality compressed video with H.264 encoding |
| WAV | Waveform Audio | Uncompressed audio for highest-fidelity processing |
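
As an illustration of the JSONL layout (one JSON object per line), a minimal reader for a locally downloaded transcript or VAD file; the path below is hypothetical and the per-line fields depend on the file type:

import json
from pathlib import Path

def read_jsonl(path: str) -> list:
    """Load a JSON Lines file into a list of dicts, one per time-aligned event."""
    with Path(path).open() as f:
        return [json.loads(line) for line in f if line.strip()]

# Hypothetical local path after downloading and extracting a dev batch.
events = read_jsonl("improvised/dev/transcript/V01_S0042_I0003_P0101.jsonl")
print(len(events), "events; first event keys:", list(events[0].keys()))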

🧪 Research Applications

The Seamless Interaction Dataset enables research across multiple domains:

Embodied AI and Virtual Agents

  • Train agents that display natural gestures
  • Model turn-taking dynamics and interaction rhythms
  • Generate contextually appropriate responses to human behavior

Multimodal Understanding

  • Analyze cross-modal correlations between speech, gesture, and expressions
  • Extract behavioral patterns from large-scale interaction data
  • Develop models to understand social dynamics

Human-Computer Interaction

  • Design interfaces that respond to subtle human cues
  • Improve telepresence technologies with better behavioral modeling
  • Create more natural conversational agents

Animation and Content Creation

  • Generate realistic human behaviors for animated characters
  • Synthesize conversational dynamics for virtual production
  • Create training data for digital human technologies

⚠️ Known Limitations and Noise in Metadata

Given the scale and complexity involved in collecting the Seamless Interaction Dataset, there are several known limitations that we are addressing in ongoing work, with improvements planned for future versions:

Errors in Human-Based Time-Stamping

The core unit of the dataset is the interaction: the active time during which a participant's conversation and behavior can be linked to a pair of prompts. We have observed instances of misaligned timestamps, including:

  • Annotated start/end times may be too early or too late.
  • Occasional misalignment between prompt text and spoken material.
  • Prompt ordering that may contain off-by-one errors.

Despite our efforts to automatically identify and correct these errors, approximately 10% of the interactions remain affected.

Time-Stamping "Noise" in Moments of Interest (MOI)

While defining an MOI inherently involves some subjectivity, there are rare instances where:

  • The described behavior only represents a subset of the observed behavior.
  • The duration of the MOI does not fully capture the annotated behavior.

Incorrect Assignment of Participant IDs

In rare instances, we have observed:

  • Duplicate participant identifiers being assigned to different individuals.
  • The same individual being mapped to different identifiers.

Unreleased "Meta Time"

Currently, the dataset contains only active time segments: time in which two participants are actively responding to prompts. Meta time refers to the time between active segments, in which participants are studying their new prompts, taking a break, etc. Meta time constitutes hundreds of hours in the raw collection and may be explored for future releases.

Variation in Recording Site Consistency

This multi-site project contains variation in:

  • Recording quality, including issues such as speaker bleed and participants moving out of frame.
  • Acting quality in Improvised segments.
  • The likelihood of time-stamping errors.

All vendors met our technical requirements; however, there is noticeable variation in production quality across different sites.

📄 License & Data Usage Policy

The Seamless Interaction Dataset is licensed under CC-BY-NC 4.0 (Creative Commons Attribution-NonCommercial 4.0 International).

This means you are free to:

  • Share — copy and redistribute the material in any medium or format
  • Adapt — remix, transform, and build upon the material

Under the following terms:

  • Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made.
  • NonCommercial — You may not use the material for commercial purposes without explicit permission.

📑 Citation

If you use the Seamless Interaction Dataset in your research, please cite:

BibTeX
@article{seamless_interaction,
  title={Seamless Interaction: Dyadic Audiovisual Motion Modeling and Large-Scale Dataset},
  author={Vasu Agrawal and
        Akinniyi Akinyemi and
        Kathryn Alvero and
        Morteza Behrooz and
        Julia Buffalini and
        Fabio Maria Carlucci and
        Joy Chen and
        Junming Chen and
        Zhang Chen and
        Shiyang Cheng and
        Praveen Chowdary and
        Joe Chuang and
        Antony D'Avirro and
        Jon Daly and
        Ning Dong and
        Mark Duppenthaler and
        Cynthia Gao and
        Jeff Girard and
        Martin Gleize and
        Sahir Gomez and
        Hongyu Gong and
        Srivathsan Govindarajan and
        Brandon Han and
        Sen He and
        Denise Hernandez and
        Yordan Hristov and
        Rongjie Huang and
        Hirofumi Inaguma and
        Somya Jain and
        Raj Janardhan and
        Qingyao Jia and
        Christopher Klaiber and
        Dejan Kovachev and
        Moneish Kumar and
        Hang Li and
        Yilei Li and
        Pavel Litvin and
        Wei Liu and
        Guangyao Ma and
        Jing Ma and
        Martin Ma and
        Xutai Ma and
        Lucas Mantovani and
        Sagar Miglani and
        Sreyas Mohan and
        Louis-Philippe Morency and
        Evonne Ng and
        Kam-Woh Ng and
        Tu Anh Nguyen and
        Amia Oberai and
        Benjamin Peloquin and
        Juan Pino and
        Jovan Popovic and
        Omid Poursaeed and
        Fabian Prada and
        Alice Rakotoarison and
        Alexander Richard and
        Christophe Ropers and
        Safiyyah Saleem and
        Vasu Sharma and
        Alex Shcherbyna and
        Jia Shen and
        Jie Shen and
        Anastasis Stathopoulos and
        Anna Sun and
        Paden Tomasello and
        Tuan Tran and
        Arina Turkatenko and
        Bo Wan and
        Chao Wang and
        Jeff Wang and
        Mary Williamson and
        Carleigh Wood and
        Tao Xiang and
        Yilin Yang and
        Zhiyuan Yao and
        Chen Zhang and
        Jiemin Zhang and
        Xinyue Zhang and
        Jason Zheng and
        Pavlo Zhyzheria and
        Jan Zikes and
        Michael Zollhoefer
  },
  url={https://ai.meta.com/research/publications/seamless-interaction-dyadic-audiovisual-motion-modeling-and-large-scale-dataset/},
  year={2025}
}

🙏 Acknowledgments

This project was made possible thanks to contributions from:

  • The thousands of participants who provided interaction data
  • Our dedicated annotation and QA team
  • Research collaborators from multiple institutions
  • FAIR (Fundamental AI Research)
  • The open-source community for valuable tools and libraries
  • Our data collection partners across multiple sites
  • Meta Reality Labs for supporting this research initiative