The “ub-MOJI” dataset is provided by the AI Vision Laboratory at Tokyo Polytechnic University, and its use is permitted for academic research purposes only.
By requesting access, you agree to the following conditions:
- Commercial use is prohibited
- Use for identification of individuals or invasion of privacy is prohibited
- Redistribution, transfer, or sublicensing is prohibited
- You are responsible for compliance with the laws of your country
- You may be required to delete the dataset if you violate the terms
By submitting a request, you agree to the above and to the full Terms of Use.
ub-MOJI

Overview
ub-MOJI is a Japanese fingerspelling video dataset designed to advance research in sign language recognition. The name "ub-MOJI" is inspired by the Japanese word for fingerspelling, yubimoji (指文字). The dataset consists of video recordings of fingerspelling gestures performed in Japanese Sign Language (JSL), systematically organized into three levels of linguistic granularity:
- Single characters: isolated kana units
- Five-character sequences: consecutive kana sequences
- Complete words: fingerspelled Japanese words
This dataset aims to support multiple research tasks, including both supervised and self-supervised learning approaches to fingerspelling recognition, as well as sequential modeling for broader sign language understanding.
Please note that a portion of the dataset is not publicly available, as some participants did not provide consent for open release.
Download Instructions
Important: We strongly recommend specifying a dataset version to ensure reproducibility. The version follows a date-based format such as 25.05. See Versioning Policy for details.
Requirements
- Before downloading the ub-MOJI dataset, you must agree to the Terms of Use.
- You must log in to your Hugging Face account:
# Using uv (no need to install huggingface_hub manually)
uvx --from huggingface_hub huggingface-cli login
# or using pip
pip install huggingface_hub
huggingface-cli login
Using huggingface-cli
- Download a specific version to the "ub-moji" directory
# Using uv
uvx --from huggingface_hub huggingface-cli download kanglabs/ub-MOJI --repo-type dataset --local-dir ub-moji --revision {version}
# or using pip
huggingface-cli download kanglabs/ub-MOJI --repo-type dataset --local-dir ub-moji --revision {version}
Using Git
- Requires git-lfs.
git lfs install
git clone https://huggingface.co/datasets/kanglabs/ub-MOJI -b {version} --depth 1
Using the Python library
Install the library:
uv add datasets
# or
pip install datasets
Load the dataset:
from datasets import load_dataset
dataset = load_dataset("kanglabs/ub-MOJI", revision="{version}")
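For example, to pin the May 2025 release (see Versioning Policy below), replace the placeholder with a concrete version:
dataset = load_dataset("kanglabs/ub-MOJI", revision="25.05")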
Data Structure
The ub-MOJI dataset is organized into three subsets, each corresponding to a different linguistic unit of Japanese fingerspelling:
- syllables/: individual kana characters (organized by subdirectories)
- sequences/: sequences of five kana characters (stored as flat files)
- words/: fingerspelled full words (stored as flat files)
Each sample is stored as an RGB video file in .mp4 format. For sequences/ and words/, corresponding .toml files provide frame-level temporal annotations. Supplementary metadata in .csv format summarizes information across all subsets.
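For orientation, a plausible layout consistent with the description above (the per-syllable subdirectories and the placement of the annotation and metadata files are assumptions on our part):
ub-moji/
├── syllables/
│   └── a/
│       └── a_001_202403_t001.mp4
├── sequences/
│   └── aiueo_001_202403_t001.mp4
├── words/
│   └── kamakura_018_202310_t001.mp4
├── annotations.toml
├── metadata.csv
└── participants.csv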
File Naming Convention
Each file follows the format:
{content}_{participantID}_{yyyymm}_{take}.mp4
- {content}: a kana syllable (e.g., a, ka), a sequence of kana (e.g., aiueo), or a full word (e.g., kamakura)
- {participantID}: participant identifier (e.g., 001)
- {yyyymm}: recording year and month
- {take}: take number (e.g., t001)
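To make the convention concrete, here is a minimal parsing sketch; the helper name is ours, and it assumes that {content} itself contains no underscores:
from pathlib import Path

def parse_ub_moji_filename(path: str) -> dict:
    """Split a ub-MOJI video filename into its naming-convention fields."""
    stem = Path(path).stem  # e.g. "kamakura_018_202310_t001"
    content, participant_id, yyyymm, take = stem.split("_")
    return {"content": content, "participant_id": participant_id,
            "yyyymm": yyyymm, "take": take}

print(parse_ub_moji_filename("words/kamakura_018_202310_t001.mp4"))
# {'content': 'kamakura', 'participant_id': '018', 'yyyymm': '202310', 'take': 't001'}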
Metadata and Annotation
- metadata.csv: sample-level metadata, including class labels, participant IDs, and recording information
- participants.csv: participant-level metadata (e.g., handedness, age group)
- annotations.toml: time-series annotations for each character or word unit, facilitating temporal modeling tasks
Data Fields
metadata.csv
This file contains metadata for each sample in the dataset. The columns are as follows:
| Field Name | Type | Description |
|---|---|---|
| file_name | str | File path of the video sample |
| classes | List[str] | Fingerspelled unit (e.g., ["a"], ["ka", "ma", "ku", "ra"]) |
| category | int | Linguistic unit category: 0=syllable, 1=sequence, 2=word |
| participant_id | int | Participant identifier (e.g., 18) |
| recording_date | int | Year and month of recording (e.g., 202403) |
| fps | int | Frames per second (e.g., 30) |
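A minimal loading sketch using pandas; that the list-valued classes column is serialized as a Python-style list literal in the CSV is our assumption:
import ast

import pandas as pd

meta = pd.read_csv("ub-moji/metadata.csv")

# Parse the list-valued classes column if it is stored as a string literal.
meta["classes"] = meta["classes"].apply(
    lambda v: ast.literal_eval(v) if isinstance(v, str) else v
)

# Example: keep only full-word samples (category == 2).
words = meta[meta["category"] == 2]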
participants.csv
This file includes metadata about the participants involved in recording.
| Field Name | Type | Description |
|---|---|---|
| participant_id | int | Participant identifier (e.g., 18) |
| age_group | str | Age decade group (e.g., "40" for age 40–49; "-1" if not provided) |
| gender | int | Gender category: 0=female, 1=male, -1 if unspecified |
| dominant_hand | int | Dominant hand: 0=right, 1=left, -1 if unspecified |
| experience_years | str | Years of sign language experience: one of "1-3", "4-6", ..., "51+", or "-1" |
| hearing_level | int | Self-reported hearing ability: 0 (no issue) to 4 (severe), or -1 (unknown) |
| face_visibility | int | Face visibility consent: 1=agreed, 0=declined |
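A sketch for decoding the integer-coded fields and attaching participant attributes to each sample; the code values follow the tables above, while the file paths and column handling are assumptions:
import pandas as pd

meta = pd.read_csv("ub-moji/metadata.csv")
participants = pd.read_csv("ub-moji/participants.csv")

# Decode integer-coded categorical fields per the table above.
participants["gender"] = participants["gender"].map(
    {0: "female", 1: "male", -1: "unspecified"}
)
participants["dominant_hand"] = participants["dominant_hand"].map(
    {0: "right", 1: "left", -1: "unspecified"}
)

# Join participant-level attributes onto per-sample metadata.
merged = meta.merge(participants, on="participant_id", how="left")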
annotations.toml
This file contains time-aligned annotations for each fingerspelling video in the dataset. Each top-level TOML table represents a single video, identified by a unique video ID (e.g., "kamakura_018_202310_t001").
["<video_id>"]
duration = <float>
fps = <float>
[["<video_id>".annotations]]
label = "<str>"
label_id = <int>
segment = [<float>, <float>]
| Field Name | Type | Description |
|---|---|---|
| "<video_id>" | str | Unique identifier for each video (includes participant and date metadata) |
| duration | float | Total duration of the video in seconds |
| fps | float | Frames per second (e.g., 60.0) |
| annotations | List[dict] | List of annotated segments for the video |
| label | str | Fingerspelled unit label (e.g., "ka", "ma") |
| label_id | int | Integer class index assigned to the label |
| segment | List[float] | Start and end time in seconds (e.g., [1.2, 2.8]) |
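A minimal reading sketch using the standard-library tomllib (Python 3.11+); the file path is an assumption, and frame indices are derived by multiplying the second-based segment boundaries by fps:
import tomllib

with open("ub-moji/annotations.toml", "rb") as f:
    data = tomllib.load(f)

for video_id, video in data.items():
    fps = video["fps"]
    for ann in video["annotations"]:
        start_s, end_s = ann["segment"]
        # Convert second-based segment boundaries to frame indices.
        start_frame, end_frame = round(start_s * fps), round(end_s * fps)
        print(video_id, ann["label"], ann["label_id"], start_frame, end_frame)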
License and Terms of Use
The ub-MOJI dataset is available exclusively for non-commercial academic research.
Access to the dataset is gated on Hugging Face Datasets and requires users to agree to the full Terms of Use before downloading.
By using the dataset, you agree to:
- Use the data for non-commercial, academic purposes only
- Not redistribute the data
- Properly cite the dataset in any publications or derivative works
For the full license and conditions, please refer to License and Terms of Use.
Versioning Policy
The ub-MOJI dataset follows a date-based versioning scheme, formatted as YY.MM. For example, 25.05 refers to the May 2025 release.
Each release may include:
- New samples (e.g., additional participants or word entries)
- Annotation refinements
- Structural or metadata schema changes
We recommend citing the specific version used in your experiments or publications to ensure reproducibility.
For details about changes in each release, please refer to the CHANGELOG.
Authors & Contributors
Authors
- Tamon Kondo (Graduate School of Engineering, Tokyo Polytechnic University)
- Ryota Murai (Graduate School of Engineering, Tokyo Polytechnic University)
- Naoto Tsuta (Department of Engineering, Tokyo Polytechnic University)
- Yousun Kang (Faculty of Engineering, Tokyo Polytechnic University)
Contributors
- Natsuki Yamanaka (Faculty of Arts, Tokyo Polytechnic University)
- Rei Aoki (Faculty of Arts, Tokyo Polytechnic University)
- Fumitaka Ono (Faculty of Arts, Tokyo Polytechnic University)
- Yonguk Lee (Faculty of Arts, Tokyo Polytechnic University)
Affiliations are listed as of the time the dataset was developed.
Acknowledgement
This dataset was made possible with the generous support of the following organizations and individuals:
- This work was supported by JSPS KAKENHI Grant Number JP25K15166.
- This work was supported by Co-G.E.I. (Cooperative Good Educational Innovation) Challenge 2023–2024, Tokyo Polytechnic University.
- We would like to thank the Tama City Council of Social Welfare and Tama City Sign Language Group "Clover" for their valuable cooperation in this project.
- We also express our sincere gratitude to all the participants who took part in the video recordings.
Citation
@misc{ubmoji2025,
title = {ub-MOJI: A Japanese Fingerspelling Video Dataset},
author = {Kondo, Tamon and Murai, Ryota and Tsuta, Naoto and Kang, Yousun},
year = {2025},
howpublished = {\url{https://huggingface.co/datasets/kanglabs/ub-MOJI}},
note = {Available for non-commercial academic use only}
}