datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | trending_score | card
---|---|---|---|---|---|---|---|---|---|
mlfoundations-dev/openthoughts2_science | mlfoundations-dev | 2025-05-05T02:39:46Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T02:39:33Z | null | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: _domain
dtype: string
- name: system
dtype: string
- name: problem
dtype: string
- name: reasoning
dtype: string
- name: deepseek_solution
dtype: string
- name: question
dtype: string
- name: source
dtype: string
- name: id
dtype: int64
- name: extracted_instruction
dtype: string
splits:
- name: train
num_bytes: 83553673.88687341
num_examples: 4923
download_size: 26667379
dataset_size: 83553673.88687341
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/openthoughts2_code | mlfoundations-dev | 2025-05-05T02:25:31Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T02:23:36Z | null | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: _domain
dtype: string
- name: system
dtype: string
- name: problem
dtype: string
- name: reasoning
dtype: string
- name: deepseek_solution
dtype: string
- name: question
dtype: string
- name: source
dtype: string
- name: id
dtype: int64
- name: extracted_instruction
dtype: string
splits:
- name: train
num_bytes: 3612495613.678592
num_examples: 212849
download_size: 2368372929
dataset_size: 3612495613.678592
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
THU-ATOM/DrugCLIP_data | THU-ATOM | 2025-05-05T02:12:28Z | 100 | 0 | [
"license:cc-by-4.0",
"region:us"
] | [] | 2024-08-29T06:36:29Z | null | ---
license: cc-by-4.0
---
# 🧬 DrugCLIP data repository
This repository hosts benchmark datasets, pre-computed molecular embeddings, pretrained model weights, and supporting files used in the **DrugCLIP** project. It also includes data and models used for **wet lab validation experiments**.
---
## 📁 Repository Contents
### 1. `DUD-E.zip`
- Full dataset for the **DUD-E benchmark**.
- Includes ligand and target files for all targets.
---
### 2. `LIT-PCBA.zip`
- Full dataset for the **LIT-PCBA benchmark**.
- Includes ligand and target files for all targets.
---
### 3. `encoded_mol_embs.zip`
- Pre-encoded molecular embeddings from the **ChemDiv** compound library.
- Each `.pkl` file contains (see the loading sketch after this list):
- `name_list`: `[hitid, SMILES]`
- `embedding_list`: list of **128-dimensional** vectors
- Versions included:
- **8-fold** version of the full ChemDiv library
- **6-fold** version of the full ChemDiv library
- **6-fold** version of a filtered ChemDiv library
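A minimal sketch for inspecting one shard — the file name and the dict layout are assumptions; the card only states that each `.pkl` holds a `name_list` and an `embedding_list`:
```python
import pickle

# Hypothetical shard name; the exact file names inside the zip are not listed here.
with open("encoded_mol_embs/chemdiv_8fold_part0.pkl", "rb") as f:
    shard = pickle.load(f)

# Assumed dict layout with the two documented fields.
name_list = shard["name_list"]            # entries like [hitid, SMILES]
embedding_list = shard["embedding_list"]  # 128-dimensional vectors

print(len(name_list), "molecules;", len(embedding_list[0]), "dims")  # dims should be 128
```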
---
### 4. `benchmark_weights.zip`
Contains **pretrained model weights** for **benchmark experiments** on the DUD-E and LIT-PCBA datasets using various ligand and target filtering strategies.
#### 🔬 DUD-E: Ligand Filtering Strategies
| Filename | Description |
|----------------------|-------------|
| `dude_ecfp_90.pt` | Trained by removing ligands with **ECFP4 similarity > 0.9**. |
| `dude_ecfp_60.pt` | Trained by removing ligands with **ECFP4 similarity > 0.6**. |
| `dude_ecfp_30.pt` | Trained by removing ligands with **ECFP4 similarity > 0.3**. |
| `dude_scaffold.pt` | Trained by removing ligands sharing **scaffolds** with test set. |
#### 🧬 DUD-E: Target Filtering Strategies
| Filename | Description |
|------------------------|-------------|
| `dude_identity_90.pt` | Removed targets with **MMseqs2 identity > 0.9**. |
| `dude_identity_60.pt` | Removed targets with **MMseqs2 identity > 0.6**. |
| `dude_identity_30.pt` | Removed targets with **MMseqs2 identity > 0.3**. |
| `dude_identity_0.pt` | Removed targets based on **HMMER sequence identity**. |
#### 🧪 LIT-PCBA: Target Filtering Strategy
| Filename | Description |
|-------------------------|-------------|
| `litpcba_identity_90.pt`| Removed targets with **MMseqs2 identity > 0.9**. |
---
### 5. `model_weights.zip`
Contains model weights trained specifically for **wet lab experiments**. These models were trained using:
- **6-fold** data splits
- **8-fold** data splits
Used to predict compounds validated in real-world assays for the following targets:
- `5HT2a`
- `NET`
- `Trip12`
---
### 6. `WetLab_PDBs_and_LMDBs`
Target data used for wet lab validation experiments:
- **LMDB files**: For DrugCLIP screening
Includes data for:
- `5HT2a`
- `NET`
- `Trip12`
---
### 7. `benchmark_throughput`
Files for reproducing throughput benchmark results.
|
AlexHung29629/mistral_hh_dpo | AlexHung29629 | 2025-05-05T02:10:20Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T02:09:49Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: chosen
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
struct:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 349398088.0
num_examples: 36222
download_size: 153956350
dataset_size: 349398088.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
iPentec/AutoComplete-Text-for-ComfyUI-Custom-Scripts | iPentec | 2025-05-05T02:09:58Z | 0 | 0 | [
"task_categories:token-classification",
"language:en",
"license:mit",
"size_categories:1M<n<10M",
"region:us",
"art"
] | [
"token-classification"
] | 2025-05-02T13:37:50Z | null | ---
task_categories:
- token-classification
language:
- en
tags:
- art
size_categories:
- 1M<n<10M
license: mit
license_name: ipentec-software-license
license_link: https://www.ipentec.com/contents/ipentec-software-license-non-profit
---
# autocomplete.txt for ComfyUI-Custom-Scripts
## 🔄 2025-05-05 ver. Update Summary
This update includes the following changes (a short sketch of the transformation follows the list):
- Replaced underscores (`_`) with spaces in tag names for better readability and consistency.
- Escaped parentheses by changing `(tag)` to `\(tag\)` to prevent unintended emphasis in ComfyUI prompt processing.
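A minimal sketch of the normalization this update describes — the function name and the sample tag are illustrative, not taken from the released file:
```python
def normalize_tag(tag: str) -> str:
    # Underscores become spaces; parentheses are escaped so ComfyUI
    # does not interpret them as prompt emphasis.
    return tag.replace("_", " ").replace("(", r"\(").replace(")", r"\)")

print(normalize_tag("long_hair_(ponytail)"))  # -> long hair \(ponytail\)
```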
|
CinematicT2vData/cinepile_captions | CinematicT2vData | 2025-05-05T01:59:28Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T01:59:25Z | null | ---
dataset_info:
- config_name: base
features:
- name: video_id
dtype: string
- name: prompt
dtype: string
- name: caption_base
dtype: string
splits:
- name: train
num_bytes: 7657147
num_examples: 3488
download_size: 3703525
dataset_size: 7657147
- config_name: scene_text
features:
- name: video_id
dtype: string
- name: prompt
dtype: string
- name: caption_scene_text
dtype: string
splits:
- name: train
num_bytes: 16262908
num_examples: 3488
download_size: 6904573
dataset_size: 16262908
configs:
- config_name: base
data_files:
- split: train
path: base/train-*
- config_name: scene_text
data_files:
- split: train
path: scene_text/train-*
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.8_num-company_3_dataset_0_for_gen_5 | HungVu2003 | 2025-05-05T01:55:24Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T01:55:23Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 6719782
num_examples: 12498
download_size: 2631930
dataset_size: 6719782
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Ibrahemqasim/countries | Ibrahemqasim | 2025-05-05T01:48:04Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T01:48:03Z | null | ---
dataset_info:
features:
- name: en
dtype: string
- name: ar
dtype: string
splits:
- name: train
num_bytes: 6017
num_examples: 185
download_size: 5408
dataset_size: 6017
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.0_alpha_0.2_num-company_2_dataset_0_for_gen_13_v2 | HungVu2003 | 2025-05-05T01:43:11Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T01:43:09Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 2907221
num_examples: 13750
download_size: 957221
dataset_size: 2907221
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
drproduck/dapo-multiple-choice-verification | drproduck | 2025-05-05T01:42:30Z | 216 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-23T17:29:00Z | null | ---
dataset_info:
features:
- name: problem
dtype: string
- name: answer
dtype: string
- name: prompt
dtype: string
- name: verify_answer
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 8153687.096596136
num_examples: 869
- name: test
num_bytes: 2045458.9034038638
num_examples: 218
- name: test_matharena
num_bytes: 450683
num_examples: 46
download_size: 3876833
dataset_size: 10649829.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: test_matharena
path: data/test_matharena-*
---
|
orgcatorg/moofoo | orgcatorg | 2025-05-05T01:37:14Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T01:19:48Z | null | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: filename
dtype: string
- name: src_text
dtype: string
- name: tgt_text
dtype: string
- name: score
dtype: string
splits:
- name: train
num_bytes: 1923417466.684
num_examples: 156468
download_size: 1664668169
dataset_size: 1923417466.684
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
davidgasquez/ine | davidgasquez | 2025-05-05T01:22:39Z | 10,915 | 4 | [
"language:es",
"license:mit",
"size_categories:1K<n<10K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-09-28T09:16:05Z | null | ---
license: mit
language:
- es
pretty_name: INE
configs:
- config_name: default
data_files:
- split: tablas
path: tablas.jsonl
default: true
---
# INE
This repository contains every table¹ from the [Instituto Nacional de Estadística](https://www.ine.es/) exported to Parquet files.
You can find any of the tables, together with their metadata, in the `tablas` folder.
Each table is identified by an ID. You can find a table's ID either on the INE site (it is the number that appears in the URL) or in this repository's `tablas.jsonl` file, which you can explore in the [Data Viewer](https://huggingface.co/datasets/davidgasquez/ine/viewer).
For example, the [_Índices nacionales de clases_](https://www.ine.es/jaxiT3/Tabla.htm?t=50904&L=0) table corresponds to ID `50904` (in the URL) and lives at [`tablas/50904/datos.parquet`](https://huggingface.co/datasets/davidgasquez/ine/blob/main/tablas/50904/datos.parquet).
You can easily run SQL queries against the remote files from any DuckDB shell ([demo](https://shell.duckdb.org/#queries=v0,select-*-from-'https%3A%2F%2Fhuggingface.co%2Fdatasets%2Fdavidgasquez%2Fine%2Fresolve%2Fmain%2Ftablas%2F50904%2Fdatos.parquet'-limit-10~)):
```sql
select
*
from 'https://huggingface.co/datasets/davidgasquez/ine/resolve/main/tablas/50904/datos.parquet' limit 10;
```
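The same query runs from Python with the `duckdb` package (a minimal sketch; recent DuckDB versions fetch `https` Parquet files via the bundled `httpfs` extension):
```python
import duckdb

# Query the remote Parquet file directly; nothing is downloaded ahead of time.
url = "https://huggingface.co/datasets/davidgasquez/ine/resolve/main/tablas/50904/datos.parquet"
df = duckdb.sql(f"select * from '{url}' limit 10").df()
print(df)
```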
¹ [The official INE manual](https://www.ine.es/dyngs/DataLab/manual.html?cid=64) defines a table as "the result of crossing groups of values contained in one or several variables; that is, a grouping of time series defined by those groups". A table can be thought of as a DataFrame, a spreadsheet, a CSV file, and so on.
|
GitBag/block-q-sharp_ds-distilled-qwen-1.5b-ppo-kl-1e-4-ec-0.001-16384_actor_hmmt-feb-25_eval | GitBag | 2025-05-05T01:08:55Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T01:08:53Z | null | ---
dataset_info:
features:
- name: problem
dtype: string
- name: answer
dtype: string
- name: response_0
dtype: string
- name: response_1
dtype: string
- name: response_2
dtype: string
- name: response_3
dtype: string
- name: response_4
dtype: string
- name: response_5
dtype: string
- name: response_6
dtype: string
- name: response_7
dtype: string
- name: response_8
dtype: string
- name: response_9
dtype: string
- name: response_10
dtype: string
- name: response_11
dtype: string
- name: response_12
dtype: string
- name: response_13
dtype: string
- name: response_14
dtype: string
- name: response_15
dtype: string
- name: response_16
dtype: string
- name: response_17
dtype: string
- name: response_18
dtype: string
- name: response_19
dtype: string
- name: response_20
dtype: string
- name: response_21
dtype: string
- name: response_22
dtype: string
- name: response_23
dtype: string
- name: response_24
dtype: string
- name: response_25
dtype: string
- name: response_26
dtype: string
- name: response_27
dtype: string
- name: response_28
dtype: string
- name: response_29
dtype: string
- name: response_30
dtype: string
- name: response_31
dtype: string
- name: eval_0
dtype: float64
- name: eval_1
dtype: float64
- name: eval_2
dtype: float64
- name: eval_3
dtype: float64
- name: eval_4
dtype: float64
- name: eval_5
dtype: float64
- name: eval_6
dtype: float64
- name: eval_7
dtype: float64
- name: eval_8
dtype: float64
- name: eval_9
dtype: float64
- name: eval_10
dtype: float64
- name: eval_11
dtype: float64
- name: eval_12
dtype: float64
- name: eval_13
dtype: float64
- name: eval_14
dtype: float64
- name: eval_15
dtype: float64
- name: eval_16
dtype: float64
- name: eval_17
dtype: float64
- name: eval_18
dtype: float64
- name: eval_19
dtype: float64
- name: eval_20
dtype: float64
- name: eval_21
dtype: float64
- name: eval_22
dtype: float64
- name: eval_23
dtype: float64
- name: eval_24
dtype: float64
- name: eval_25
dtype: float64
- name: eval_26
dtype: float64
- name: eval_27
dtype: float64
- name: eval_28
dtype: float64
- name: eval_29
dtype: float64
- name: eval_30
dtype: float64
- name: eval_31
dtype: float64
splits:
- name: train
num_bytes: 40126901
num_examples: 30
download_size: 14041435
dataset_size: 40126901
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
VGraf/general_responses_dev_8maxturns_truncated2048 | VGraf | 2025-05-05T01:01:51Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T01:01:41Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: id
dtype: string
- name: source
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 148940753
num_examples: 9192
download_size: 75671864
dataset_size: 148940753
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
wrl2003/StereoFromCarla | wrl2003 | 2025-05-05T00:51:18Z | 2 | 1 | [
"task_categories:depth-estimation",
"task_categories:image-to-image",
"license:mit",
"size_categories:n<1K",
"format:text",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2411.14053",
"region:us"
] | [
"depth-estimation",
"image-to-image"
] | 2025-05-03T03:10:34Z | null | ---
license: mit
task_categories:
- depth-estimation
- image-to-image
---
# StereoFromCarla
[\[📂 GitHub\]](https://github.com/XiandaGuo/OpenStereo)
## Introduction
Welcome to the StereoFromCarla Open Dataset! This dataset is designed to support research and development in the field of Stereo Depth Estimation. It is composed of data collected from the CARLA simulator.
A finished subset of the stereo dataset, Normal Town01, is uploaded here; the complete dataset is distributed via bypy.
## Data List
### StereoFromCarla (subset)
Normal Town01
### StereoFromCarla (complete)
Link: https://pan.baidu.com/s/1SrVQ1nFkVNusfX-fP2yfLg?pwd=d014 (extraction code: `d014`; shared from a Baidu Netdisk Super VIP V3 account)
## Citation
If you find this project useful in your research, please consider citing:
```BibTeX
@misc{guo2024stereoanythingunifyingstereo,
title={Stereo Anything: Unifying Stereo Matching with Large-Scale Mixed Data},
author={Xianda Guo and Chenming Zhang and Youmin Zhang and Dujun Nie and Ruilin Wang and Wenzhao Zheng and Matteo Poggi and Long Chen},
year={2024},
eprint={2411.14053},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2411.14053},
}
``` |
mlfoundations-dev/nemo_nano_code_300k | mlfoundations-dev | 2025-05-05T00:50:18Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T00:45:01Z | null | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: category
dtype: string
- name: license
dtype: string
- name: reasoning
dtype: string
- name: generator
dtype: string
- name: used_in_training
dtype: string
- name: version
dtype: string
- name: system_prompt
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 12807840274.407661
num_examples: 316000
download_size: 5260813155
dataset_size: 12807840274.407661
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/nemo_nano_code_100k | mlfoundations-dev | 2025-05-05T00:45:01Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T00:43:17Z | null | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: category
dtype: string
- name: license
dtype: string
- name: reasoning
dtype: string
- name: generator
dtype: string
- name: used_in_training
dtype: string
- name: version
dtype: string
- name: system_prompt
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 4053114010.8885
num_examples: 100000
download_size: 1673315841
dataset_size: 4053114010.8885
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/nemo_nano_code_30k | mlfoundations-dev | 2025-05-05T00:43:16Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T00:42:45Z | null | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: category
dtype: string
- name: license
dtype: string
- name: reasoning
dtype: string
- name: generator
dtype: string
- name: used_in_training
dtype: string
- name: version
dtype: string
- name: system_prompt
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 1280784027.440766
num_examples: 31600
download_size: 528378872
dataset_size: 1280784027.440766
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/nemo_nano_code_3k | mlfoundations-dev | 2025-05-05T00:42:30Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T00:42:25Z | null | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: category
dtype: string
- name: license
dtype: string
- name: reasoning
dtype: string
- name: generator
dtype: string
- name: used_in_training
dtype: string
- name: version
dtype: string
- name: system_prompt
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 128078402.74407661
num_examples: 3160
download_size: 53848062
dataset_size: 128078402.74407661
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/nemo_nano_code_0.3k | mlfoundations-dev | 2025-05-05T00:42:21Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T00:42:19Z | null | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: category
dtype: string
- name: license
dtype: string
- name: reasoning
dtype: string
- name: generator
dtype: string
- name: used_in_training
dtype: string
- name: version
dtype: string
- name: system_prompt
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 12807840.27440766
num_examples: 316
download_size: 4967198
dataset_size: 12807840.27440766
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/nemo_nano_math_1000k | mlfoundations-dev | 2025-05-05T00:37:38Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T00:15:07Z | null | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: category
dtype: string
- name: license
dtype: string
- name: reasoning
dtype: string
- name: generator
dtype: string
- name: used_in_training
dtype: string
- name: version
dtype: string
- name: system_prompt
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 33402096800.24601
num_examples: 1000000
download_size: 15037099961
dataset_size: 33402096800.24601
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dushj98/waikato_aerial_imagery_2017_8cls_v2 | dushj98 | 2025-05-05T00:30:26Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T00:29:44Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': deciduous_hardwood
'1': harvested_forest
'2': high_producing_grassland
'3': indigenous_forest
'4': lake_pond
'5': manuka_kanuka
'6': shortrotation_cropland
'7': urban_build_up
splits:
- name: train
num_bytes: 444052755.08
num_examples: 5328
- name: validation
num_bytes: 221908771.608
num_examples: 2672
download_size: 666025184
dataset_size: 665961526.688
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
marcuscedricridia/PanDomain-V1.2-helpless | marcuscedricridia | 2025-05-05T00:28:36Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T00:28:30Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: conversations
sequence:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 159621435
num_examples: 15000
download_size: 68938338
dataset_size: 159621435
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.8_num-company_3_dataset_2_for_gen_4 | HungVu2003 | 2025-05-05T00:23:15Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T00:23:13Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 4241647
num_examples: 12498
download_size: 1773113
dataset_size: 4241647
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/nemo_nano_math_300k | mlfoundations-dev | 2025-05-05T00:15:05Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T00:07:04Z | null | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: category
dtype: string
- name: license
dtype: string
- name: reasoning
dtype: string
- name: generator
dtype: string
- name: used_in_training
dtype: string
- name: version
dtype: string
- name: system_prompt
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 10555062588.877739
num_examples: 316000
download_size: 4755192607
dataset_size: 10555062588.877739
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
osama24sy/llama3.1-8b-it-10k-qwen-singleturn-onesolution-r256-results-check-20250504-17464018808727 | osama24sy | 2025-05-04T23:44:44Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T23:44:43Z | null | ---
dataset_info:
features:
- name: index
dtype: int64
- name: numbers
sequence: int64
- name: operations
sequence:
sequence: string
- name: response
dtype: string
- name: token_count
dtype: int64
splits:
- name: train
num_bytes: 261239
num_examples: 150
download_size: 105713
dataset_size: 261239
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
amraly1983/Shadn_UI | amraly1983 | 2025-05-04T23:41:44Z | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-05-04T22:43:58Z | null | ---
license: apache-2.0
---
|
SharanShivram/NLP_Final_Project_Ragas_Final_Testset | SharanShivram | 2025-05-04T23:40:23Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T23:13:58Z | null | ---
dataset_info:
features:
- name: collection
dtype: string
- name: hw
dtype: int64
- name: experiment
dtype: string
- name: question_number
dtype: int64
- name: experiment_tag
dtype: string
- name: user_input
dtype: string
- name: response
dtype: string
- name: context_ids
sequence: int64
- name: retrieved_contexts
sequence: string
- name: context_vectors
sequence:
sequence: float64
- name: user_input_vector
sequence: float32
- name: response_vector
sequence: float32
- name: reference
dtype: string
- name: reference_vector
sequence: float32
splits:
- name: train
num_bytes: 27633990
num_examples: 432
download_size: 13714510
dataset_size: 27633990
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
osama24sy/llama3.1-8b-it-10k-qwen-singleturn-onesolution-r64-results-check-20250504-17464015686717 | osama24sy | 2025-05-04T23:38:00Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T23:37:59Z | null | ---
dataset_info:
features:
- name: index
dtype: int64
- name: numbers
sequence: int64
- name: operations
sequence:
sequence: string
- name: response
dtype: string
- name: token_count
dtype: int64
splits:
- name: train
num_bytes: 233509
num_examples: 150
download_size: 100506
dataset_size: 233509
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
test-gen/code_mbpp_0.5b_temp0.1_num8_tests_mbpp_mbpp-dagger-easy-qwen-coder-7b-from-sft_t0.0_n1 | test-gen | 2025-05-04T23:36:06Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T23:36:05Z | null | ---
dataset_info:
features:
- name: task_id
dtype: int32
- name: text
dtype: string
- name: code
dtype: string
- name: test_list
sequence: string
- name: test_setup_code
dtype: string
- name: challenge_test_list
sequence: string
- name: generated_code
sequence: string
- name: gt_rewards
sequence: float64
- name: execution_rewards
sequence: float64
- name: rewards
sequence: float64
- name: verification_info
struct:
- name: language
dtype: string
- name: test_cases
sequence: string
splits:
- name: test
num_bytes: 5832499
num_examples: 500
download_size: 1121024
dataset_size: 5832499
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
asafxrev/eval_act_so100_jenga_box_simple_2_1 | asafxrev | 2025-05-04T23:35:29Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-05-04T23:35:27Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 3,
"total_frames": 665,
"total_tasks": 1,
"total_videos": 3,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:3"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.follower_cam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
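Since the card's `configs` entry points at the Parquet files under `data/`, the per-frame records can be loaded with the `datasets` library (a minimal sketch; the camera streams live in separate MP4 files and are not part of these rows):
```python
from datasets import load_dataset

# Loads the tabular episode data: action/state vectors, timestamps,
# and frame/episode/task indices, as declared in meta/info.json.
ds = load_dataset("asafxrev/eval_act_so100_jenga_box_simple_2_1", split="train")
print(ds.num_rows)       # expected: 665 frames across 3 episodes
print(ds.column_names)
```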
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
test-gen/code_mbpp_0.5b_temp0.1_num8_tests_mbpp_mbpp-qwen-coder-7b_t0.0_n1 | test-gen | 2025-05-04T23:27:20Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T23:27:19Z | null | ---
dataset_info:
features:
- name: task_id
dtype: int32
- name: text
dtype: string
- name: code
dtype: string
- name: test_list
sequence: string
- name: test_setup_code
dtype: string
- name: challenge_test_list
sequence: string
- name: generated_code
sequence: string
- name: gt_rewards
sequence: float64
- name: execution_rewards
sequence: float64
- name: rewards
sequence: float64
- name: verification_info
struct:
- name: language
dtype: string
- name: test_cases
sequence: string
splits:
- name: test
num_bytes: 5843709
num_examples: 500
download_size: 1125613
dataset_size: 5843709
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
mlfoundations-dev/nemo_nano_science | mlfoundations-dev | 2025-05-04T23:20:41Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T22:29:18Z | null | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: category
dtype: string
- name: license
dtype: string
- name: reasoning
dtype: string
- name: generator
dtype: string
- name: used_in_training
dtype: string
- name: version
dtype: string
- name: system_prompt
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 23101222831.762413
num_examples: 708920
download_size: 5405887550
dataset_size: 23101222831.762413
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
allenchienxxx/so100_test | allenchienxxx | 2025-05-04T23:20:18Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | 2025-05-04T23:20:11Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 2,
"total_frames": 1650,
"total_tasks": 1,
"total_videos": 4,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
SayantanJoker/Shrutilipi_Hindi_resampled_44100_merged_14_quality_metadata | SayantanJoker | 2025-05-04T23:16:09Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T23:16:07Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: file_name
dtype: string
- name: utterance_pitch_mean
dtype: float32
- name: utterance_pitch_std
dtype: float32
- name: snr
dtype: float64
- name: c50
dtype: float64
- name: speaking_rate
dtype: string
- name: phonemes
dtype: string
- name: stoi
dtype: float64
- name: si-sdr
dtype: float64
- name: pesq
dtype: float64
- name: noise
dtype: string
- name: reverberation
dtype: string
- name: speech_monotony
dtype: string
- name: sdr_noise
dtype: string
- name: pesq_speech_quality
dtype: string
splits:
- name: train
num_bytes: 24728212
num_examples: 50000
download_size: 8350090
dataset_size: 24728212
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
test-gen/code_mbpp_0.5b_temp0.1_num8_tests_mbpp_mbpp-qwen-coder-7b-from-sft_t0.0_n1 | test-gen | 2025-05-04T23:15:54Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T23:15:53Z | null | ---
dataset_info:
features:
- name: task_id
dtype: int32
- name: text
dtype: string
- name: code
dtype: string
- name: test_list
sequence: string
- name: test_setup_code
dtype: string
- name: challenge_test_list
sequence: string
- name: generated_code
sequence: string
- name: gt_rewards
sequence: float64
- name: execution_rewards
sequence: float64
- name: rewards
sequence: float64
- name: verification_info
struct:
- name: language
dtype: string
- name: test_cases
sequence: string
splits:
- name: test
num_bytes: 5848306
num_examples: 500
download_size: 1129176
dataset_size: 5848306
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
pmohan6/so100_test_1 | pmohan6 | 2025-05-04T23:10:18Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-05-04T23:09:54Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 10,
"total_frames": 8974,
"total_tasks": 1,
"total_videos": 20,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:10"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.main": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
GitBag/DeepSeek-R1-Distill-Qwen-1.5B_hmmt-feb-25_eval | GitBag | 2025-05-04T23:07:04Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T23:07:03Z | null | ---
dataset_info:
features:
- name: problem
dtype: string
- name: answer
dtype: string
- name: response_0
dtype: string
- name: response_1
dtype: string
- name: response_2
dtype: string
- name: response_3
dtype: string
- name: response_4
dtype: string
- name: response_5
dtype: string
- name: response_6
dtype: string
- name: response_7
dtype: string
- name: response_8
dtype: string
- name: response_9
dtype: string
- name: response_10
dtype: string
- name: response_11
dtype: string
- name: response_12
dtype: string
- name: response_13
dtype: string
- name: response_14
dtype: string
- name: response_15
dtype: string
- name: response_16
dtype: string
- name: response_17
dtype: string
- name: response_18
dtype: string
- name: response_19
dtype: string
- name: response_20
dtype: string
- name: response_21
dtype: string
- name: response_22
dtype: string
- name: response_23
dtype: string
- name: response_24
dtype: string
- name: response_25
dtype: string
- name: response_26
dtype: string
- name: response_27
dtype: string
- name: response_28
dtype: string
- name: response_29
dtype: string
- name: response_30
dtype: string
- name: response_31
dtype: string
- name: eval_0
dtype: float64
- name: eval_1
dtype: float64
- name: eval_2
dtype: float64
- name: eval_3
dtype: float64
- name: eval_4
dtype: float64
- name: eval_5
dtype: float64
- name: eval_6
dtype: float64
- name: eval_7
dtype: float64
- name: eval_8
dtype: float64
- name: eval_9
dtype: float64
- name: eval_10
dtype: float64
- name: eval_11
dtype: float64
- name: eval_12
dtype: float64
- name: eval_13
dtype: float64
- name: eval_14
dtype: float64
- name: eval_15
dtype: float64
- name: eval_16
dtype: float64
- name: eval_17
dtype: float64
- name: eval_18
dtype: float64
- name: eval_19
dtype: float64
- name: eval_20
dtype: float64
- name: eval_21
dtype: float64
- name: eval_22
dtype: float64
- name: eval_23
dtype: float64
- name: eval_24
dtype: float64
- name: eval_25
dtype: float64
- name: eval_26
dtype: float64
- name: eval_27
dtype: float64
- name: eval_28
dtype: float64
- name: eval_29
dtype: float64
- name: eval_30
dtype: float64
- name: eval_31
dtype: float64
splits:
- name: train
num_bytes: 39227790
num_examples: 30
download_size: 13160569
dataset_size: 39227790
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
BasedLukas/so101_test_8 | BasedLukas | 2025-05-04T23:03:47Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot",
"so101",
"tutorial"
] | [
"robotics"
] | 2025-05-04T23:02:33Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so101
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so101",
"total_episodes": 15,
"total_frames": 9522,
"total_tasks": 1,
"total_videos": 30,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 60,
"splits": {
"train": "0:15"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.pov": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 60,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.overhead": {
"dtype": "video",
"shape": [
360,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 360,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 60,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
gokulp01/so100_test | gokulp01 | 2025-05-04T22:57:31Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | 2025-05-04T22:39:13Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 1,
"total_frames": 897,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
BasedLukas/so101_test_7 | BasedLukas | 2025-05-04T22:52:12Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot",
"so101",
"tutorial"
] | [
"robotics"
] | 2025-05-04T22:51:50Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so101
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so101",
"total_episodes": 3,
"total_frames": 2663,
"total_tasks": 1,
"total_videos": 6,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 60,
"splits": {
"train": "0:3"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.pov": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 60,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.overhead": {
"dtype": "video",
"shape": [
360,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 360,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 60,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
asafxrev/so100_jenga_box_simple | asafxrev | 2025-05-04T22:39:48Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | 2025-05-04T22:39:45Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 5,
"total_frames": 1711,
"total_tasks": 1,
"total_videos": 5,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:5"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.follower_cam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
SayantanJoker/Shrutilipi_Hindi_resampled_44100_merged_14_quality | SayantanJoker | 2025-05-04T22:33:13Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T22:33:09Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: file_name
dtype: string
- name: utterance_pitch_mean
dtype: float32
- name: utterance_pitch_std
dtype: float32
- name: snr
dtype: float64
- name: c50
dtype: float64
- name: speaking_rate
dtype: float64
- name: phonemes
dtype: string
- name: stoi
dtype: float64
- name: si-sdr
dtype: float64
- name: pesq
dtype: float64
splits:
- name: train
num_bytes: 19103519
num_examples: 50000
download_size: 8540351
dataset_size: 19103519
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
alucchi/Qwen2.5-1.5B-Instruct_n1000_e10_oadam0.0001_b16_1_a0_flash_compact | alucchi | 2025-05-04T22:26:43Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T22:26:32Z | null | ---
dataset_info:
- config_name: default
features:
- name: task_id
dtype: string
- name: prompt
dtype: string
- name: generated_text
dtype: string
- name: generated_grid_rect
sequence:
sequence: int64
- name: task_solution
sequence:
sequence:
sequence: int64
- name: match
dtype: int64
- name: score
dtype: float64
splits:
- name: train
num_bytes: 731033
num_examples: 70
download_size: 83084
dataset_size: 731033
- config_name: main
features:
- name: task_id
dtype: string
- name: prompt
dtype: string
- name: generated_text
dtype: string
- name: generated_grid_rect
sequence:
sequence: int64
- name: task_solution
sequence:
sequence:
sequence: int64
- name: match
dtype: int64
- name: score
dtype: float64
splits:
- name: train
num_bytes: 731033
num_examples: 70
download_size: 83084
dataset_size: 731033
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- config_name: main
data_files:
- split: train
path: main/train-*
---
|
HungVu2003/opt-350m_beta_0.0_alpha_0.2_num-company_2_dataset_0_for_gen_12_v2 | HungVu2003 | 2025-05-04T22:17:51Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T22:17:50Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 2937847
num_examples: 13750
download_size: 977079
dataset_size: 2937847
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
regia-ai/news-triplets | regia-ai | 2025-05-04T22:17:15Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T22:16:31Z | null | ---
dataset_info:
features:
- name: anchor
dtype: string
- name: positive
dtype: string
- name: type
dtype: string
- name: negative
dtype: string
splits:
- name: train
num_bytes: 1221361469
num_examples: 429903
download_size: 699941442
dataset_size: 1221361469
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kyars/provenance-graph-extraction | kyars | 2025-05-04T22:16:43Z | 75 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-23T17:58:59Z | null | ---
dataset_info:
features:
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 1397815
num_examples: 1539
download_size: 125086
dataset_size: 1397815
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Aihometr/UniVerse | Aihometr | 2025-05-04T22:16:32Z | 191 | 1 | [
"task_categories:text-to-image",
"language:en",
"license:openrail",
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"text",
"image",
"text to",
"to",
"text-to-image"
] | [
"text-to-image"
] | 2025-02-04T14:39:59Z | null | ---
license: openrail
dataset_info:
features:
- name: url
dtype: string
- name: image
dtype: image
- name: resolution
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 17259159
num_examples: 10
download_size: 17261410
dataset_size: 17259159
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- text-to-image
language:
- en
tags:
- text
- image
- text to
- to
- text-to-image
---

# Dataset Information
This dataset consists of images collected from publicly available sources such as YouTube, Instagram, Facebook, and e-commerce product sites. Please note that **all images are collected from public profiles**, and no private data is included.
## Important Notes:
- The images were not manually reviewed; the dataset was gathered automatically by algorithms designed for this purpose.
- The dataset was curated with the [AI Studio](https://aistudio.google.com/app/prompts/new_chat) API by repeatedly querying Google's AI model about each image's content, so the likelihood of explicit (18+) material being included is minimal.
- As a privacy and security measure, some URLs are absent from the dataset; they were deliberately not collected, to ensure safety and compliance.
Please note that all data collection was done in accordance with public access guidelines and the dataset adheres to open-source licensing terms.
## Purpose
This dataset was created to train a text-to-image AI model from scratch. It is recommended to use this dataset for that purpose.
## Additional Information
It’s important to note that large companies such as Google, OpenAI, and others often collect publicly shared images from the internet, including data that may belong to you. If you have concerns about data usage, we recommend raising your voice with these organizations, as they also utilize publicly available data for their models and services. |
Enderchef/ICONN-1-Training-Data-Lite | Enderchef | 2025-05-04T22:13:07Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T20:31:47Z | null | ---
dataset_info:
features:
- name: parquet_file
dtype: string
splits:
- name: train
num_bytes: 5389004019
num_examples: PLACEHOLDER
download_size: 5389004019
dataset_size: 5389004019
license: Apache-2.0
citation:
- HuggingFaceFW/fineweb-edu
- HuggingFaceTB/smoltalk
- microsoft/orca-agentinstruct-1M-v1
---
# ICONN 1 Training Data
This dataset consists of multiple Parquet files containing training data for ICONN 1. It is formatted for compatibility with the Hugging Face Datasets library.
## Dataset Structure
- `0000000000001.parquet` (2,669,609,525 bytes)
- `0000000000002.parquet` (1,851,737,698 bytes)
- `0000000000003.parquet` (768,726,953 bytes)
- `0000000000004.parquet` (98,693,924 bytes)
- `0000000000005.parquet` (235,919 bytes)
**Total size:** 5,389,004,019 bytes (~5.39 GB)
## Usage
You can load this dataset using the Hugging Face `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("Enderchef/ICONN-1-Training-Data-Lite")
```
## License
Apache License 2.0
## Credits
This dataset was created by **ICONN(Enderchef)** and is part of the model **ICONN**.
Portions of this dataset were inspired by or derived from:
- [HuggingFaceFW/fineweb-edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu)
- [HuggingFaceTB/smoltalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk)
- [microsoft/orca-agentinstruct-1M-v1](https://huggingface.co/datasets/microsoft/orca-agentinstruct-1M-v1)
## Citation
Please cite this dataset as follows:
```
@dataset{iconn1_training_data,
title = {ICONN 1 Training Data},
author = {ICONN(Enderchef)},
year = {2025},
url = {https://huggingface.co/datasets/Enderchef/ICONN-1-Training-Data-Lite},
license = {Apache-2.0},
credits = {HuggingFaceFW/fineweb-edu, HuggingFaceTB/smoltalk, microsoft/orca-agentinstruct-1M-v1}
}
```
## Contact
For questions or issues, contact [[email protected]].
|
scotty-c/go-stdlib | scotty-c | 2025-05-04T22:09:50Z | 1 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-02T12:40:19Z | null | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 52169366
num_examples: 5038
- name: validation
num_bytes: 11892602
num_examples: 561
download_size: 21908619
dataset_size: 64061968
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
Odog16/eval_act_lekiwi_test_4.1 | Odog16 | 2025-05-04T22:09:05Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-05-04T22:08:49Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "lekiwi",
"total_episodes": 5,
"total_frames": 1988,
"total_tasks": 1,
"total_videos": 10,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:5"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
9
],
"names": [
"shoulder_pan",
"shoulder_lift",
"elbow_flex",
"wrist_flex",
"wrist_roll",
"gripper",
"x_mm",
"y_mm",
"theta"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
9
],
"names": [
"shoulder_pan",
"shoulder_lift",
"elbow_flex",
"wrist_flex",
"wrist_roll",
"gripper",
"x_mm",
"y_mm",
"theta"
]
},
"observation.images.front": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
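The YAML header maps the default config to `data/*/*.parquet`, so the non-video features can also be inspected as plain tabular rows with the `datasets` library. A minimal sketch (the camera streams stay in the separate MP4 files):
```python
from datasets import load_dataset

# All 5 episodes load as a single "train" split of per-frame rows
frames = load_dataset("Odog16/eval_act_lekiwi_test_4.1")
row = frames["train"][0]
print(row["action"])  # 9 values: 6 arm joints + x_mm, y_mm, theta
print(row["episode_index"], row["frame_index"])
```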
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
GitBag/block-q-sharp_ds-distilled-qwen-1.5b-ppo-kl-1e-4-ec-0.001-16384_actor_hmmt-feb-24_eval | GitBag | 2025-05-04T22:01:33Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T22:01:31Z | null | ---
dataset_info:
features:
- name: problem
dtype: string
- name: answer
dtype: string
- name: response_0
dtype: string
- name: response_1
dtype: string
- name: response_2
dtype: string
- name: response_3
dtype: string
- name: response_4
dtype: string
- name: response_5
dtype: string
- name: response_6
dtype: string
- name: response_7
dtype: string
- name: response_8
dtype: string
- name: response_9
dtype: string
- name: response_10
dtype: string
- name: response_11
dtype: string
- name: response_12
dtype: string
- name: response_13
dtype: string
- name: response_14
dtype: string
- name: response_15
dtype: string
- name: response_16
dtype: string
- name: response_17
dtype: string
- name: response_18
dtype: string
- name: response_19
dtype: string
- name: response_20
dtype: string
- name: response_21
dtype: string
- name: response_22
dtype: string
- name: response_23
dtype: string
- name: response_24
dtype: string
- name: response_25
dtype: string
- name: response_26
dtype: string
- name: response_27
dtype: string
- name: response_28
dtype: string
- name: response_29
dtype: string
- name: response_30
dtype: string
- name: response_31
dtype: string
- name: eval_0
dtype: float64
- name: eval_1
dtype: float64
- name: eval_2
dtype: float64
- name: eval_3
dtype: float64
- name: eval_4
dtype: float64
- name: eval_5
dtype: float64
- name: eval_6
dtype: float64
- name: eval_7
dtype: float64
- name: eval_8
dtype: float64
- name: eval_9
dtype: float64
- name: eval_10
dtype: float64
- name: eval_11
dtype: float64
- name: eval_12
dtype: float64
- name: eval_13
dtype: float64
- name: eval_14
dtype: float64
- name: eval_15
dtype: float64
- name: eval_16
dtype: float64
- name: eval_17
dtype: float64
- name: eval_18
dtype: float64
- name: eval_19
dtype: float64
- name: eval_20
dtype: float64
- name: eval_21
dtype: float64
- name: eval_22
dtype: float64
- name: eval_23
dtype: float64
- name: eval_24
dtype: float64
- name: eval_25
dtype: float64
- name: eval_26
dtype: float64
- name: eval_27
dtype: float64
- name: eval_28
dtype: float64
- name: eval_29
dtype: float64
- name: eval_30
dtype: float64
- name: eval_31
dtype: float64
splits:
- name: train
num_bytes: 39694870
num_examples: 30
download_size: 14183542
dataset_size: 39694870
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kkteru/anthropic-hh-chosen-llama-chat | kkteru | 2025-05-04T21:57:01Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T21:56:43Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 158823087
num_examples: 160800
- name: test
num_bytes: 8509786
num_examples: 8552
download_size: 92061425
dataset_size: 167332873
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
osama24sy/llama3.1-8b-it-10k-qwen-singleturn-onesolution-r16-MODEL-countdown-results | osama24sy | 2025-05-04T21:56:24Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T21:56:23Z | null | ---
dataset_info:
features:
- name: index
dtype: int64
- name: numbers
sequence: int64
- name: target
dtype: int64
- name: operations
sequence:
sequence: string
- name: response
dtype: string
- name: token_count
dtype: int64
splits:
- name: train
num_bytes: 188122
num_examples: 150
download_size: 82506
dataset_size: 188122
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Ochir04075143/Zurag | Ochir04075143 | 2025-05-04T21:55:26Z | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-05-04T21:46:41Z | null | ---
license: apache-2.0
---
|
kkteru/anthropic-hh-chosen-llama-chat-tmp2 | kkteru | 2025-05-04T21:53:44Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T21:48:22Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 4960
num_examples: 10
- name: test
num_bytes: 6666
num_examples: 10
download_size: 10670
dataset_size: 11626
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Zack49/gym_lowcost_push_state | Zack49 | 2025-05-04T21:47:23Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-05-04T21:46:13Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": null,
"total_episodes": 1000,
"total_frames": 14470,
"total_tasks": 1,
"total_videos": 0,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 25,
"splits": {
"train": "0:1000"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"observation.state": {
"dtype": "float32",
"names": null,
"shape": [
12
]
},
"observation.environment_state": {
"dtype": "float32",
"names": null,
"shape": [
6
]
},
"action": {
"dtype": "float32",
"shape": [
5
],
"names": null
},
"next.reward": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"next.success": {
"dtype": "bool",
"shape": [
1
],
"names": null
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
kkteru/anthropic-hh-chosen-llama-chat-tmp | kkteru | 2025-05-04T21:42:03Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T21:42:02Z | null | ---
dataset_info:
features:
- name: train
list:
- name: content
dtype: string
- name: role
dtype: string
- name: test
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 11626
num_examples: 10
download_size: 10254
dataset_size: 11626
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jackzhang/CoSAlign-Train-BT-WG | jackzhang | 2025-05-04T21:41:38Z | 0 | 0 | [
"license:other",
"arxiv:2410.08968",
"region:us"
] | [] | 2025-05-04T17:07:53Z | null | ---
dataset_info:
features:
- name: system_prompt
dtype: string
- name: prompt
dtype: string
- name: chosen_response
dtype: string
- name: rejected_response
dtype: string
- name: spec_i
dtype: string
- name: spec_j
dtype: string
- name: mode
dtype: string
- name: chosen_cat
dtype: string
- name: rejected_cat
dtype: string
- name: chosen_category_error_score
dtype: float64
- name: rejected_category_error_score
dtype: float64
splits:
- name: train_bt
num_bytes: 197195617.8659859
num_examples: 71438
- name: train_wg
num_bytes: 139081458.1340141
num_examples: 50385
- name: dev
num_bytes: 7481666
num_examples: 2845
download_size: 79727645
dataset_size: 343758742
configs:
- config_name: default
data_files:
- split: train_bt
path: data/train_bt-*
- split: train_wg
path: data/train_wg-*
- split: dev
path: data/dev-*
license: other
---
# CoSAlign-Train: A Large-Scale Synthetic Training Dataset for Controllable Safety Alignment
**Paper**: [Controllable Safety Alignment: Inference-Time Adaptation to Diverse Safety Requirements](https://arxiv.org/abs/2410.08968), published at ICLR 2025.
**Purpose**: Training dataset for controllable safety alignment (CoSA) of large language models (LLMs), facilitating fine-grained inference-time adaptation to diverse safety requirements.
**Description**: CoSAlign-Train is a large-scale, synthetic preference dataset designed for training LLMs to adapt safety behaviors based on free-form natural language "safety configs." It includes pairs of prompts and responses, each paired with safety configs specifying permissible and impermissible risk categories. Response pairs are scored to optimize LLM preferences toward safety-compliant helpfulness.
**Composition**:
- 122K synthetic preference pairs (config, prompt, chosen response, rejected response)
- Derived from BeaverTails and Wildguard datasets, categorized across 8 safety risk types.
- Includes diverse synthetic safety configs created from a risk taxonomy via templating.
**Datasets splits and license**
The training set is divided into two parts, `train_bt` and `train_wg`, derived from the BeaverTails (https://huggingface.co/datasets/PKU-Alignment/BeaverTails) and WildGuard-Train (https://huggingface.co/datasets/allenai/wildguardmix) datasets respectively. We also include a `dev` set derived from BeaverTails.
Importantly, the BeaverTails- and WildGuard-derived splits have different licenses, following their original licenses:
- `train_bt` and `dev` splits are under the cc-by-nc-4.0 license: https://creativecommons.org/licenses/by-nc/4.0/deed.en
- `train_wg` is under the odc-by license https://opendatacommons.org/licenses/by/1-0/
Please use the data accordingly.
**Applications**: Training LLMs for controllable safety alignment
**Authors**: Jingyu Zhang, Ahmed Elgohary, Ahmed Magooda, Daniel Khashabi, Benjamin Van Durme
**Project URL**: [aka.ms/controllable-safety-alignment](https://aka.ms/controllable-safety-alignment) |
xbilek25/train_hall_absorb_0.7_25200_28800 | xbilek25 | 2025-05-04T21:38:33Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T17:42:48Z | null | ---
dataset_info:
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
splits:
- name: train
num_bytes: 714033397.0
num_examples: 3600
download_size: 561816562
dataset_size: 714033397.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ChuckMcSneed/NeoEvalPlusN_benchmark | ChuckMcSneed | 2025-05-04T21:35:10Z | 154 | 19 | [
"license:wtfpl",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"leaderboard",
"benchmark"
] | [] | 2023-12-18T07:39:37Z | null | ---
license: wtfpl
tags:
- leaderboard
- benchmark
---
Since the automatic open-source benchmark leaderboard got flooded with incoherent overtrained cheater meme models, I decided to take matters into my own hands and create my own set of proprietary tests. The aim of these tests is not to see how smart the model is, but to see how good it is at execution of commands and creative writing in a reasonably quantifiable way. All tests are executed with temperature and top P≈0 and rep. penalty=1 in koboldcpp. A model-appropriate format is used, unless it doesn't work.
Currently I have the following tests:
## B-test:
This test is designed to establish the baseline of the model. It consists of a main task and a bunch of text which the model has to ignore while still executing the task. If the model refuses or fails to comply in a logical way immediately, it fails (0/3). After the initial request, the model gets bombarded with text: it gets 1 point for reaching the first checkpoint (1/3), another point for passing the test fully (2/3), and a final point for exiting the test successfully (3/3).
## C-test:
Like the B-test, but the task is simpler and the distracting text is way more annoying. Since the task is much simpler, there are fewer points to gain. The model gets 1 point for passing the main distractions and another point for successfully exiting the task. Models get penalized for writing more than necessary, e.g. (Note: as an AI language model...).
## D-test:
This test is designed around breaking expectations. It consists of a common math trick, but with a twist. The twist is that there is no math involved, just reading. It also has an extensive section at the end to guide the model into breaking the overtrained conditioning. Models will get 1 point for getting the answer right and up to 2 points for the right reasoning.
## P-test:
Poems. The model passes each poem test by writing coherently and in rhyme. 1 point for each poem, 6 in total.
After seeing Miqu-120b succeed at positive writing and fail miserably at negative, I decided to revise the test a little bit by adjusting the ratios. Assume that all models prior to and including Miqu-120b were run on the old set, and newer ones will be run on the revised set.
## S-test:
Stylized writing. Models are asked to explain a concept in a distinct writing style or as if they are a character. Up to 1 point for each style. Models are penalized for failing to explain the concept or to keep the style all the way through the explanation. 8 in total. **Note:** not very reliable due to a large human factor (±1). Take with a grain of salt.
# What does each of the tests measure I dont understand111!!!11!
BCD=following commands
PS=creative writing
# RESULTS

In the table above you can see the results visiualized. You can find pure data in file [LLM-test.csv](LLM-test.csv)
What they show is quite interesting:
- If a model can't pass any of the BCD tests, it is most likely braindead or very filtered (kinda same lol)
- If a model's SP score is very low, its writing style is dry
- Creative parent(Euryale) + creative parent(Xwin)=creative child(Goliath)
- Creative parent(Euryale) + dry parent(Nous-Hermes) + drier parent(SynthIA)=dry-ish child(Venus)
- Dry parent(Nous-Hermes) + creative parent(Xwin) + creative parent(Mythospice)=creative child(lzlv)
- Cheater meme model(una-cybertron) was somewhat creative, but braindead
- Base model self-merge(Dicephal-123B) increased creativity, but didn't add extra prompt compliance
- All my attempts to extend the context of XWin and Llama by using [Yukang's](https://huggingface.co/Yukang) loras have led to a drastic decrease in the creativity and coherence of the models :(
- Miqu is currently the best 32k model according to this benchmark
- Miqu-120b is the second model after ChatGPT to have fully (100%) passed the S-test!
# More tests?
Feel free to suggest more models for testing by opening a new discussion. Mention the model name, size, and why you want to test it.
# Limitations
- All tests were only done once.
- Human factor plays a huge role in SP tests. After redoing some of the tests I noticed ±1 variation for S-test and ±0.5 variation for P-test. (Xwin is likely underrated and Spicyboros is likely overrated in S-test.)
- Be critical of my own models! Since I have access to the benchmark, I can game it and rig it all I want and NOBODY can stop me.
# Can it be rigged/gamed?
Not sure. I've tried to game it by merging, but didn't succeed. You can check out my first attempt [here](https://huggingface.co/ChuckMcSneed/BenchmaxxxerPS-v1-123b).
If my questions somehow get leaked and the models are trained on them specifically, then definitely.
Update: I made [this RP model](https://huggingface.co/ChuckMcSneed/Gembo-v1-70b) while using this benchmark as a guideline for right/wrong merging. It has a ridiculously high score: 19.75/22! It's not bad, in fact, it is quite interesting in practice, but still far from ChatGPT (or maybe not, I haven't used it in a while. Maybe they've lobotomized it to hell). |
mehmet0001/github-commits-dataset | mehmet0001 | 2025-05-04T21:23:38Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T21:23:18Z | null | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1325874280
num_examples: 91646
download_size: 393630553
dataset_size: 1325874280
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mmmmmp/robot_grab | mmmmmp | 2025-05-04T21:22:38Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-05-04T21:22:32Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "so100",
"total_episodes": 3,
"total_frames": 1769,
"total_tasks": 1,
"total_videos": 6,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:3"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
kingsleykim/gsm8k_rollouts_qwen_2.5_math_7b | kingsleykim | 2025-05-04T21:14:28Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T21:14:27Z | null | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: correctness
dtype: bool
- name: answers
dtype: string
- name: questions
dtype: string
- name: correct
dtype: bool
- name: question
dtype: string
splits:
- name: train
num_bytes: 118061287
num_examples: 59449
download_size: 16654467
dataset_size: 118061287
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mmmmmp/robot_test3 | mmmmmp | 2025-05-04T21:01:39Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-05-04T21:01:36Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "so100",
"total_episodes": 1,
"total_frames": 443,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
GitBag/block-q-sharp_ds-distilled-qwen-1.5b-ppo-kl-1e-4-ec-0.001-14336_actor_hmmt-feb-24_eval | GitBag | 2025-05-04T20:56:44Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T20:56:42Z | null | ---
dataset_info:
features:
- name: problem
dtype: string
- name: answer
dtype: string
- name: response_0
dtype: string
- name: response_1
dtype: string
- name: response_2
dtype: string
- name: response_3
dtype: string
- name: response_4
dtype: string
- name: response_5
dtype: string
- name: response_6
dtype: string
- name: response_7
dtype: string
- name: response_8
dtype: string
- name: response_9
dtype: string
- name: response_10
dtype: string
- name: response_11
dtype: string
- name: response_12
dtype: string
- name: response_13
dtype: string
- name: response_14
dtype: string
- name: response_15
dtype: string
- name: response_16
dtype: string
- name: response_17
dtype: string
- name: response_18
dtype: string
- name: response_19
dtype: string
- name: response_20
dtype: string
- name: response_21
dtype: string
- name: response_22
dtype: string
- name: response_23
dtype: string
- name: response_24
dtype: string
- name: response_25
dtype: string
- name: response_26
dtype: string
- name: response_27
dtype: string
- name: response_28
dtype: string
- name: response_29
dtype: string
- name: response_30
dtype: string
- name: response_31
dtype: string
- name: eval_0
dtype: float64
- name: eval_1
dtype: float64
- name: eval_2
dtype: float64
- name: eval_3
dtype: float64
- name: eval_4
dtype: float64
- name: eval_5
dtype: float64
- name: eval_6
dtype: float64
- name: eval_7
dtype: float64
- name: eval_8
dtype: float64
- name: eval_9
dtype: float64
- name: eval_10
dtype: float64
- name: eval_11
dtype: float64
- name: eval_12
dtype: float64
- name: eval_13
dtype: float64
- name: eval_14
dtype: float64
- name: eval_15
dtype: float64
- name: eval_16
dtype: float64
- name: eval_17
dtype: float64
- name: eval_18
dtype: float64
- name: eval_19
dtype: float64
- name: eval_20
dtype: float64
- name: eval_21
dtype: float64
- name: eval_22
dtype: float64
- name: eval_23
dtype: float64
- name: eval_24
dtype: float64
- name: eval_25
dtype: float64
- name: eval_26
dtype: float64
- name: eval_27
dtype: float64
- name: eval_28
dtype: float64
- name: eval_29
dtype: float64
- name: eval_30
dtype: float64
- name: eval_31
dtype: float64
splits:
- name: train
num_bytes: 26410304
num_examples: 30
download_size: 8677715
dataset_size: 26410304
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
xbilek25/train_hall_absorb_0.7_14400_18000 | xbilek25 | 2025-05-04T20:48:22Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-04T16:38:08Z | null | ---
dataset_info:
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
splits:
- name: train
num_bytes: 719853524.0
num_examples: 3600
download_size: 564525641
dataset_size: 719853524.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SayantanJoker/Shrutilipi_Hindi_resampled_44100_merged_12 | SayantanJoker | 2025-05-04T20:43:52Z | 42 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T07:30:49Z | null | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
- name: file_name
dtype: string
splits:
- name: train
num_bytes: 29661280359.39822
num_examples: 49829
download_size: 29568520302
dataset_size: 29661280359.39822
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sodabori/deepseek-math-7b-instruct_b4g128r128_prm_dataset | sodabori | 2025-05-04T20:36:04Z | 3 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T18:30:18Z | null | ---
dataset_info:
features:
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 926354
num_examples: 800
download_size: 288386
dataset_size: 926354
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.5_alpha_0.8_num-company_3_dataset_1_for_gen_2 | HungVu2003 | 2025-05-04T20:29:47Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T20:29:45Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 2239900
num_examples: 12498
download_size: 1231621
dataset_size: 2239900
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SeaLLMs/TrueFalse-Statements-multilingual | SeaLLMs | 2025-05-04T20:23:41Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T20:23:39Z | null | ---
dataset_info:
features:
- name: statements
dtype: string
- name: true/false
dtype: bool
- name: category
dtype: string
- name: language
dtype: string
splits:
- name: test
num_bytes: 4727153
num_examples: 48680
download_size: 1369810
dataset_size: 4727153
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
xbilek25/train_hall_absorb_0.7_7200_10800 | xbilek25 | 2025-05-04T20:16:28Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-04T15:54:54Z | null | ---
dataset_info:
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
splits:
- name: train
num_bytes: 718956646.0
num_examples: 3600
download_size: 565325061
dataset_size: 718956646.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ibragim-bad/random-data-1 | ibragim-bad | 2025-05-04T20:15:19Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T20:15:17Z | null | ---
dataset_info:
features:
- name: instance_id
dtype: string
- name: base_commit
dtype: string
- name: created_at
dtype: timestamp[us]
- name: environment_setup_commit
dtype: string
- name: hints_text
dtype: string
- name: patch
dtype: string
- name: problem_statement
dtype: string
- name: repo
dtype: string
- name: test_patch
dtype: string
- name: meta
struct:
- name: commit_name
dtype: string
- name: failed_lite_validators
sequence: string
- name: has_test_patch
dtype: bool
- name: is_lite
dtype: bool
- name: num_modified_files
dtype: int64
- name: version
dtype: string
- name: install_config
struct:
- name: env_yml_path
sequence: string
- name: install
dtype: string
- name: log_parser
dtype: string
- name: no_use_env
dtype: bool
- name: packages
dtype: string
- name: pip_packages
sequence: string
- name: pre_install
sequence: string
- name: python
dtype: string
- name: reqs_path
sequence: string
- name: test_cmd
dtype: string
- name: requirements
dtype: string
- name: environment
dtype: string
- name: FAIL_TO_PASS
sequence: string
- name: FAIL_TO_FAIL
sequence: string
- name: PASS_TO_PASS
sequence: string
- name: PASS_TO_FAIL
sequence: 'null'
- name: license_name
dtype: string
splits:
- name: test
num_bytes: 2540351
num_examples: 100
download_size: 884812
dataset_size: 2540351
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
chiyuanhsiao/audio_L2-regular-ties_spoken-web-questions | chiyuanhsiao | 2025-05-04T20:10:29Z | 1 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-02T22:07:46Z | null | ---
dataset_info:
features:
- name: url
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
- name: question_unit
sequence: int64
- name: response_interleaf
dtype: string
- name: response_text
dtype: string
- name: response_tokens
sequence: int64
- name: response_speech
dtype: audio
- name: response_asr
dtype: string
splits:
- name: test
num_bytes: 1171111920.0
num_examples: 2032
download_size: 1102215992
dataset_size: 1171111920.0
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
Glupayy/camera_pose | Glupayy | 2025-05-04T20:07:57Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-05-04T20:05:54Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "not_a_robot",
"total_episodes": 1,
"total_frames": 55,
"total_tasks": 1,
"total_videos": 0,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 1,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"image_1": {
"dtype": "image",
"shape": [
256,
256,
3
]
},
"image_2": {
"dtype": "image",
"shape": [
256,
256,
3
]
},
"image_3": {
"dtype": "image",
"shape": [
256,
256,
3
]
},
"image_4": {
"dtype": "image",
"shape": [
256,
256,
3
]
},
"image_5": {
"dtype": "image",
"shape": [
256,
256,
3
]
},
"state": {
"dtype": "float32",
"shape": [
6
]
},
"actions": {
"dtype": "float32",
"shape": [
60
]
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
Kinsleykinsley/ligase_bioremedation_sequences | Kinsleykinsley | 2025-05-04T19:52:53Z | 0 | 0 | [
"task_categories:text-generation",
"task_categories:text-classification",
"license:mit",
"region:us",
"chemistry",
"biology",
"code",
"climate"
] | [
"text-generation",
"text-classification"
] | 2025-05-04T19:22:46Z | null | ---
license: mit
task_categories:
- text-generation
- text-classification
tags:
- chemistry
- biology
- code
- climate
pretty_name: >-
  This dataset contains the SMILES strings and chemical properties of common
  pollutants (e.g., formaldehyde, phenol) together with their corresponding
  ligase sequences for bioremediation.
--- |
dsfsi/zasca-sum | dsfsi | 2025-05-04T19:48:19Z | 63 | 0 | [
"license:cc-by-sa-4.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"doi:10.57967/hf/3565",
"region:us"
] | [] | 2024-11-21T07:41:08Z | null | ---
license: cc-by-sa-4.0
dataset_info:
- config_name: with_summaries
features:
- name: id
dtype: string
- name: type
dtype: string
- name: year
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 57867100
num_examples: 1521
- name: validation
num_bytes: 10985252
num_examples: 299
- name: test
num_bytes: 11291457
num_examples: 298
download_size: 42168935
dataset_size: 80143809
- config_name: without_summaries
features:
- name: id
dtype: string
- name: type
dtype: string
- name: year
dtype: string
- name: input
dtype: string
splits:
- name: all_data
num_bytes: 55925930
num_examples: 2053
download_size: 29653319
dataset_size: 55925930
configs:
- config_name: with_summaries
data_files:
- split: train
path: with_summaries/train-*
- split: validation
path: with_summaries/validation-*
- split: test
path: with_summaries/test-*
- config_name: without_summaries
data_files:
- split: all_data
path: without_summaries/all_data-*
---
# ZASCA-Sum: South African Supreme Court of Appeal Summarization Dataset
**ZASCA-Sum** is a curated dataset comprising over 4,000 judgments from the South African Supreme Court of Appeal (SCA), each paired with corresponding media summaries. This dataset is designed to facilitate research in legal natural language processing (NLP), particularly in the areas of legal summarization, information retrieval, and the development of legal language models tailored to South African jurisprudence.
---
## Dataset Overview
- **Total Entries**: 4,000+ judgment-summary pairs
- **Languages**: English (South African legal context)
- **Structure**:
- `judgment_text`: Full text of the SCA judgment
- `media_summary`: Corresponding media summary of the judgment
- `case_id`: Unique identifier for each case
- `date`: Date of the judgment
- `url`: Link to the official judgment document
---
## Usage
This dataset is intended for:
- Training and evaluating legal summarization models
- Developing legal information retrieval systems
- Conducting research in legal NLP and computational law
- Enhancing access to legal information and promoting transparency in the judicial system
---
## Accessing the Dataset
The dataset is available on Hugging Face:
👉 [https://huggingface.co/datasets/dsfsi/zasca-sum](https://huggingface.co/datasets/dsfsi/zasca-sum)
To load the dataset using the Hugging Face `datasets` library:
```python
from datasets import load_dataset
# The repo defines two configs; this loads the judgment-summary pairs
dataset = load_dataset("dsfsi/zasca-sum", "with_summaries")
```
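The unsummarized judgments live in the second config declared in the YAML header and can be pulled the same way:
```python
raw = load_dataset("dsfsi/zasca-sum", "without_summaries")  # single "all_data" split
```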
---
## Citation
If you use ZASCA-Sum in your research, please cite the following publication:
```bibtex
@article{abdulmumin2025zasca,
title={ZASCA-Sum: A Dataset of the South Africa Supreme Courts of Appeal Judgments and Media Summaries},
author={Abdulmumin, Idris and Marivate, Vukosi},
journal={Data in Brief},
volume={47},
pages={111567},
year={2025},
publisher={Elsevier},
doi={10.1016/j.dib.2025.111567}
}
```
---
## License
This dataset is released under the [Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/) license.
---
For more information, please refer to the [associated publication](https://doi.org/10.1016/j.dib.2025.111567).
--- |
amylonidis/PatClass2011 | amylonidis | 2025-05-04T19:42:48Z | 165 | 0 | [
"license:mit",
"size_categories:100K<n<1M",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"legal",
"patents"
] | [] | 2024-11-21T19:38:00Z | null | ---
license: mit
language:
- en
tags:
- legal
- patents
pretty_name: PatClass2011
size_categories:
- 100K<n<1M
---
# PatClass2011 Dataset

## Dataset Summary
The **PatClass2011** dataset is a comprehensive collection of approximately 719,000 patent documents from the CLEF-IP 2011 Test Collection,
focusing on patent classification tasks. Each entry encompasses detailed metadata and textual content, including titles, abstracts, descriptions, and claims.
The dataset is structured to facilitate research in patent classification, information retrieval, and natural language processing.
## Languages
The dataset contains English, French and German text.
## Domain
Patents (intellectual property).
## Dataset Curators
The dataset was created by Eleni Kamateri and Tasos Mylonidis
## Dataset Structure
The dataset consists of 28 folders, each corresponding to a specific year from 1978 to 2005. Within each yearly subdirectory, you'll find a CSV file named in the format
clefip2011_en_classification_<year>.csv. These files contain the patent data published that year.
This structure facilitates year-wise analysis, allowing researchers to study trends and patterns in patent classifications over time. In total, there are 19 data fields in each CSV.
### Data Fields
The dataset is provided in CSV format and includes the following fields:
- `ucid`: Unique identifier for the patent document.
- `doc_number`: Patent document number.
- `country`: Country code of the patent.
- `kind`: Kind code indicating the type of patent document.
- `lang`: Language of the patent document.
- `date`: Publication date of the patent.
- `application_date`: Date when the patent application was filed.
- `date_produced`: Date when the data was inserted in the dataset.
- `status`: Status of the patent document.
- `main_code`: Primary classification code assigned to the patent.
- `further_codes`: Additional classification codes.
- `ipcr_codes`: International Patent Classification codes.
- `ecla_codes`: European Classification codes.
- `title`: Title of the patent document.
- `abstract`: Abstract summarizing the patent.
- `description`: Detailed description of the patent.
- `claims`: Claims defining the scope of the patent protection.
- `applicants`: Entities or individuals who applied for the patent.
- `inventors`: Inventors credited in the patent document.
## Usage
## Loading the Dataset
### Sample (March to April 1985)
The following script can be used to load a sample version of the dataset, containing all the patent applications
published from March to April 1985.
```python
import pandas as pd
from datasets import load_dataset
from datetime import datetime
def load_csvs_from_huggingface(start_date, end_date):
"""
Load only the necessary CSV files from a Hugging Face dataset repository.
:param start_date: str, the start date in 'YYYY-MM-DD' format (inclusive)
:param end_date: str, the end date in 'YYYY-MM-DD' format (inclusive)
:return: pd.DataFrame, combined data from selected CSVs
"""
huggingface_dataset_name = "amylonidis/PatClass2011"
column_types = {
"ucid": "string",
"country": "category",
"doc_number": "int64",
"kind": "category",
"lang": "category",
"date": "int32",
"application_date": "int32",
"date_produced": "int32",
"status": "category",
"main_code": "string",
"further_codes": "string",
"ipcr_codes": "string",
"ecla_codes": "string",
"title": "string",
"abstract": "string",
"description": "string",
"claims": "string",
"applicants": "string",
"inventors": "string",
}
dataset_years = ['1978', '1979', '1980', '1981', '1982', '1983', '1984', '1985', '1986',
'1987', '1988', '1989', '1990', '1991', '1992', '1993', '1994', '1995',
'1996','1997', '1998', '1999', '2000', '2001', '2002','2003', '2004', '2005']
start_date_int = int(datetime.strptime(start_date, "%Y-%m-%d").strftime("%Y%m%d"))
end_date_int = int(datetime.strptime(end_date, "%Y-%m-%d").strftime("%Y%m%d"))
start_year, end_year = str(start_date_int)[:4], str(end_date_int)[:4]
given_years = [str(year) for year in range(int(start_year), int(end_year) + 1)]
matching_years = [f for f in dataset_years for year in given_years if f==year]
if not matching_years:
raise ValueError(f"No matching CSV files found in dataset for the given dates")
df_list = []
for year in matching_years:
filepath = f"data/years/{year}/clefip2011_en_classification_{year}_validated.csv"
try:
dataset = load_dataset(huggingface_dataset_name, data_files=filepath)
df = dataset["train"].to_pandas().astype(column_types)
df_list.append(df)
except Exception as e:
print(f"Error loading {filepath}: {e}")
if df_list:
df = pd.concat(df_list, ignore_index=True)
df["date"] = df["date"].astype(float).astype(int)
df_filtered = df[(df["date"] >= start_date_int) & (df["date"] <= end_date_int)]
return df_filtered
else:
return pd.DataFrame()
```
```python
start_date = "1985-03-01"
end_date = "1985-04-30"
df = load_csvs_from_huggingface(start_date, end_date)
```
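Once the filtered frame is in hand, the classification fields described above can be examined directly. A quick sketch on top of the sample loader (the exact casing of `lang` values is an assumption, hence the `str.upper()` guard):
```python
# Distribution of primary classification codes in the filtered window
print(df["main_code"].value_counts().head(10))

# Restrict to English-language documents that have a non-empty abstract
df_en = df[(df["lang"].str.upper() == "EN") & df["abstract"].notna()]
print(len(df_en), "English documents with abstracts")
```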
### Full
To load the complete dataset using the Hugging Face `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("amylonidis/PatClass2011")
```
This will load the dataset into a `DatasetDict` object. Please make sure you have enough disk space.
## Google Colab Analytics
You can also use the following Google Colab notebooks to explore the Analytics that were performed to the dataset.
- [Date Analytics](https://colab.research.google.com/drive/1N2w5F1koWmZOyQaf0ZTB3gighPTXtUzD?usp=sharing)
## Dataset Creation
### Source Data
The PatClass2011 dataset aggregates the patent documents from the CLEF-IP 2011 Test Collection using a parsing script. The data includes both metadata and full-text fields, facilitating a wide range of research applications.
### Annotations
The dataset does not contain any human-written or computer-generated annotations beyond those already present in the patent documents of the Source Data.
## Licensing Information
This dataset is distributed under the [MIT License](https://opensource.org/licenses/MIT). Users are free to use, modify, and distribute the dataset, provided that the original authors are credited.
## Citation
If you utilize the PatClass2011 dataset in your research or applications, please cite it appropriately.
--- |
zedlika/Taskwhitea | zedlika | 2025-05-04T19:33:31Z | 0 | 0 | [
"task_categories:robotics",
"region:us",
"phosphobot",
"so100",
"phospho-dk"
] | [
"robotics"
] | 2025-05-04T19:11:30Z | null |
---
tags:
- phosphobot
- so100
- phospho-dk
task_categories:
- robotics
---
# Taskwhitea
**This dataset was generated using a [phospho starter pack](https://robots.phospho.ai).**
This dataset contains a series of episodes recorded with a robot and multiple cameras. It can be directly used to train a policy using imitation learning. It's compatible with LeRobot and RLDS.
|
prakharb01/Synthetic-Hinglish-Finetuning-Dataset | prakharb01 | 2025-05-04T19:32:59Z | 5 | 0 | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"language:en",
"language:hi",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"Hinglish",
"Finetuning"
] | [
"text-generation",
"text2text-generation"
] | 2025-05-03T16:44:21Z | null | ---
license: apache-2.0
task_categories:
- text-generation
- text2text-generation
language:
- en
- hi
tags:
- Hinglish
- Finetuning
pretty_name: Hinglish conversation with context dataset
size_categories:
- 1K<n<10K
---
# Hinglish Conversations Dataset
## Overview
This dataset contains synthetically generated conversational dialogues in Hinglish (a blend of Hindi and English). The conversations revolve around typical college life, cultural festivities, daily routines, and general discussions, designed to be relatable and engaging.
---
## Dataset Details
- **Language:** Hinglish (Hindi + English)
- **Domain:** College life, daily interactions, cultural events, and general discussions
- **Size:** 3,576 conversations
- **Turns per Conversation:** 3–5 conversational exchanges per context
- **Generation Method:** Synthetically generated using Google's Gemini-2.5-Pro
- **Intended Usage:** Fine-tuning NLP models for tasks such as:
- Personality alignment in conversational AI
- Hinglish conversational response generation
- Context-aware dialogue modeling
- Language modeling and sentiment analysis in mixed-language scenarios
---
## Example Conversation Snippet
```json
{
"conversation": [
{
"role": "user",
"content": "Oye, Navratri night ke liye rangoli ka kya scene hai? Mummyji toh keh rahi thi ki badi honi chahiye is baar. Aur lighting ka bhi kuch plan hai tere dimag mein, photographer sahab?"
},
{
"role": "assistant",
"content": "Haan bhai, rangoli ka design toh final hai. Thoda traditional aur thoda modern touch diya hai. Lighting mein LED strips use karenge, colorful wali. Energy bhi bachegi aur photo bhi mast aayegi, tension na le."
},
{
"role": "user",
"content": "LED theek hai, lekin woh blink-blink wali nahi chahiye. Aankhein dukhti hain phir. Aur rangoli mein colors bright rakhna, festive feel aani chahiye. Mummyji ne bola hai ki design mein diya bhi banana hai."
},
{
"role": "assistant",
"content": "Haan, blink wali toh bilkul nahi. Steady lights hi lagayenge. Rangoli mein vibrant colors hi use kar raha hoon, aur diya wala design bhi add kar diya hai. Don't worry, mummyji ko pasand aayega pakka."
},
{
"role": "user",
"content": "Perfect! Phir toh done deal hai. Bas time pe start kar dena kaam, last minute rush nahi chahiye. Aur haan, ek selfie light bhi rakhna, sabko photos bhi toh khichwani hongi. Navratri ki shuruwat hai, dhoom machani hai!"
},
{
"role": "assistant",
"content": "Bilkul bhai, time pe shuru kar denge. Selfie light ka bhi jugad kar liya hai. Tension na le, is baar Navratri aisa hoga ki sab yaad rakhenge. Rangoli bhi superhit, lighting bhi dhamakedar. Ready ho jao!"
}
]
}
```
---
## Applications
- **Conversational AI Training:** Improves chatbot engagement in colloquial and bilingual contexts.
- **Cultural Context Learning:** Useful for training models to understand festive and culturally nuanced conversations.
- **Personality Alignment:** Helps align conversational models with casual, friendly, and youthful conversational styles.
- **Code-switching Research:** Valuable for linguistic studies and modeling code-switching behavior.
---
## Usage Instructions
To utilize this dataset for fine-tuning tasks:
- Load the `.json` file directly into your NLP framework or data processing pipeline.
- Ensure preprocessing to handle mixed languages properly.
- Consider tokenization methods that handle Hinglish effectively.
Examples using Python:
```python
# [Direct]
import json
# Load dataset
with open('./data/train.json', 'r', encoding='utf-8') as file:
    conversations = json.load(file)
# Process conversations
dataset = [item for item in conversations]
```
```python
# [HuggingFace]
from datasets import load_dataset
# Load the dataset directly from the Hugging Face Hub
dataset = load_dataset("prakharb01/Synthetic-Hinglish-Finetuning-Dataset")
```
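For supervised fine-tuning, each multi-turn conversation is typically flattened into (context, response) pairs. A sketch assuming each row carries a `conversation` list shaped like the example above:
```python
pairs = []
for row in dataset["train"]:
    turns = row["conversation"]
    for i, turn in enumerate(turns):
        if turn["role"] == "assistant":
            # Everything before this assistant turn becomes the context
            context = "\n".join(t["content"] for t in turns[:i])
            pairs.append({"context": context, "response": turn["content"]})
print(len(pairs), "training pairs")
```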
## Bias, Risks, and Recommendations
This dataset, generated with Google's Gemini-2.5-Pro model, may include unintended biases, linguistic patterns, or stereotypes present in synthetic conversational data. Users should validate and potentially augment it with real-world conversations to ensure suitability for sensitive or practical applications.
## License
This dataset is provided for research purposes. Ensure appropriate attribution and compliance with relevant ethical guidelines when using the dataset. |
hr16/ViVoicePP | hr16 | 2025-05-04T19:28:10Z | 311 | 0 | [
"license:apache-2.0",
"modality:audio",
"region:us"
] | [] | 2025-04-12T13:49:54Z | null | ---
license: apache-2.0
---
|
Roshal/AI4EO_DatasetsDiversity_Evals | Roshal | 2025-05-04T19:19:38Z | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-05-04T18:47:03Z | null | ---
license: apache-2.0
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.8_num-company_3_dataset_0_for_gen_3 | HungVu2003 | 2025-05-04T19:13:59Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T19:13:57Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 5543381
num_examples: 12498
download_size: 2482432
dataset_size: 5543381
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
BasedLukas/so101_test_3 | BasedLukas | 2025-05-04T19:11:08Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot",
"so101",
"tutorial"
] | [
"robotics"
] | 2025-05-04T19:10:48Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so101
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so101",
"total_episodes": 1,
"total_frames": 894,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
xbilek25/hall_abs0.7_train_36000 | xbilek25 | 2025-05-04T19:06:05Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T19:02:22Z | null | ---
dataset_info:
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
splits:
- name: train
num_bytes: 7173762696.0
num_examples: 36000
download_size: 5264251570
dataset_size: 7173762696.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
fljjoin/cloth_fold_04 | fljjoin | 2025-05-04T18:55:39Z | 1 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"sam_bimanual",
"tutorial"
] | [
"robotics"
] | 2025-05-02T20:40:00Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- sam_bimanual
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "sam_bimanual",
"total_episodes": 1,
"total_frames": 8922,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
14
],
"names": [
"left_shoulder_pan",
"left_shoulder_lift",
"left_elbow_flex",
"left_wrist_flex",
"left_wrist_pan",
"left_wrist_roll",
"left_gripper",
"right_shoulder_pan",
"right_shoulder_lift",
"right_elbow_flex",
"right_wrist_flex",
"right_wrist_pan",
"right_wrist_roll",
"right_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
14
],
"names": [
"left_shoulder_pan",
"left_shoulder_lift",
"left_elbow_flex",
"left_wrist_flex",
"left_wrist_pan",
"left_wrist_roll",
"left_gripper",
"right_shoulder_pan",
"right_shoulder_lift",
"right_elbow_flex",
"right_wrist_flex",
"right_wrist_pan",
"right_wrist_roll",
"right_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
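As a small worked example, the `data_path` and `video_path` templates above expand as plain Python format strings; for episode 0 of this single-episode dataset:

```python
episode_index = 0
chunk = episode_index // 1000  # "chunks_size": 1000 above

data_path = f"data/chunk-{chunk:03d}/episode_{episode_index:06d}.parquet"
video_path = (f"videos/chunk-{chunk:03d}/observation.images.laptop/"
              f"episode_{episode_index:06d}.mp4")
print(data_path)   # data/chunk-000/episode_000000.parquet
print(video_path)  # videos/chunk-000/observation.images.laptop/episode_000000.mp4
```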
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
blasi03/github-issues-qa-dataset | blasi03 | 2025-05-04T18:41:41Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T18:41:38Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: cot_answer
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 8199755
num_examples: 1200
download_size: 3521861
dataset_size: 8199755
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Zayneeh/jenny-tts-tags-v1 | Zayneeh | 2025-05-04T18:32:07Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T18:27:36Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: description
dtype: string
- name: utterance_pitch_mean
dtype: float32
- name: utterance_pitch_std
dtype: float32
- name: snr
dtype: float64
- name: c50
dtype: float64
- name: speaking_rate
dtype: string
- name: phonemes
dtype: string
- name: noise
dtype: string
- name: reverberation
dtype: string
- name: speech_monotony
dtype: string
splits:
- name: train
num_bytes: 547765
num_examples: 3000
download_size: 91946
dataset_size: 547765
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
user074/concat_cleaned_gsm8k_math_8 | user074 | 2025-05-04T18:21:42Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T17:51:25Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 92211153
num_examples: 14310
download_size: 20119888
dataset_size: 92211153
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sunnytqin/toy-multistep-nn_20-na_10-nab_40-seed_0 | sunnytqin | 2025-05-04T18:06:23Z | 12 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-02T01:48:34Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
- name: text
dtype: string
- name: prompts
dtype: string
- name: completions
dtype: string
- name: num_maskeds
dtype: int64
- name: texts
dtype: string
splits:
- name: train
num_bytes: 30742368
num_examples: 100000
- name: train_rl
num_bytes: 3871336
num_examples: 10000
- name: test_nm_0
num_bytes: 1545860
num_examples: 5000
- name: test_nm_1
num_bytes: 1970892
num_examples: 5000
- name: test_nm_2
num_bytes: 2371460
num_examples: 5000
- name: test_nm_3
num_bytes: 2619708
num_examples: 5000
- name: test_nm_4
num_bytes: 2776276
num_examples: 5000
download_size: 20075071
dataset_size: 45897900
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: train_rl
path: data/train_rl-*
- split: test_nm_0
path: data/test_nm_0-*
- split: test_nm_1
path: data/test_nm_1-*
- split: test_nm_2
path: data/test_nm_2-*
- split: test_nm_3
path: data/test_nm_3-*
- split: test_nm_4
path: data/test_nm_4-*
---
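A hedged usage sketch (assuming, from the `num_maskeds` column declared above, that `test_nm_{k}` indexes the number of masked steps): iterate the five test splits and check how masking scales prompt length.

```python
from datasets import load_dataset

repo = "sunnytqin/toy-multistep-nn_20-na_10-nab_40-seed_0"
for nm in range(5):
    split = load_dataset(repo, split=f"test_nm_{nm}")
    avg_chars = sum(len(x["prompts"]) for x in split) / len(split)
    print(f"test_nm_{nm}: {avg_chars:.1f} avg prompt chars")
```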
|
kaustuvkunal/legaldoc | kaustuvkunal | 2025-05-04T18:06:22Z | 0 | 0 | [
"language:en",
"region:us"
] | [] | 2025-05-04T17:52:58Z | null | ---
language:
- en
configs:
- config_name: default
data_files:
- split: examples
path: examples.csv
- split: gold_examples
path: gold_examples.csv
--- |
paulinus/so100_test | paulinus | 2025-05-04T18:02:08Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | 2025-05-04T18:02:04Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 1,
"total_frames": 629,
"total_tasks": 1,
"total_videos": 0,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
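A minimal loading sketch, assuming the `lerobot` package and its `LeRobotDataset` class — implied by the card's tooling but not documented here, so treat the import path as an assumption:

```python
# Assumed import path for the lerobot library's dataset class.
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

ds = LeRobotDataset("paulinus/so100_test")
frame = ds[0]
print(frame["observation.state"].shape)  # expected: torch.Size([6]) per the schema
```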
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
Lithium73fr/test2 | Lithium73fr | 2025-05-04T17:55:20Z | 0 | 0 | [
"task_categories:robotics",
"region:us",
"phosphobot",
"so100",
"phospho-dk"
] | [
"robotics"
] | 2025-05-02T18:43:56Z | null |
---
tags:
- phosphobot
- so100
- phospho-dk
task_categories:
- robotics
---
# test2
**This dataset was generated using a [phospho starter pack](https://robots.phospho.ai).**
This dataset contains a series of episodes recorded with a robot and multiple cameras. It can be directly used to train a policy using imitation learning. It's compatible with LeRobot and RLDS.
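As a minimal sketch, the `data_files: data/*/*.parquet` config above means the episode tables can also be loaded directly with the `datasets` library:

```python
from datasets import load_dataset

ds = load_dataset("Lithium73fr/test2", split="train")
print(len(ds), "frames; columns:", ds.column_names)
```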
|
GitBag/Qwen2.5-7B_hmmt-feb-25_eval | GitBag | 2025-05-04T17:48:21Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T17:48:20Z | null | ---
dataset_info:
features:
- name: problem
dtype: string
- name: answer
dtype: string
- name: response_0
dtype: string
- name: response_1
dtype: string
- name: response_2
dtype: string
- name: response_3
dtype: string
- name: response_4
dtype: string
- name: response_5
dtype: string
- name: response_6
dtype: string
- name: response_7
dtype: string
- name: response_8
dtype: string
- name: response_9
dtype: string
- name: response_10
dtype: string
- name: response_11
dtype: string
- name: response_12
dtype: string
- name: response_13
dtype: string
- name: response_14
dtype: string
- name: response_15
dtype: string
- name: response_16
dtype: string
- name: response_17
dtype: string
- name: response_18
dtype: string
- name: response_19
dtype: string
- name: response_20
dtype: string
- name: response_21
dtype: string
- name: response_22
dtype: string
- name: response_23
dtype: string
- name: response_24
dtype: string
- name: response_25
dtype: string
- name: response_26
dtype: string
- name: response_27
dtype: string
- name: response_28
dtype: string
- name: response_29
dtype: string
- name: response_30
dtype: string
- name: response_31
dtype: string
- name: eval_0
dtype: float64
- name: eval_1
dtype: float64
- name: eval_2
dtype: float64
- name: eval_3
dtype: float64
- name: eval_4
dtype: float64
- name: eval_5
dtype: float64
- name: eval_6
dtype: float64
- name: eval_7
dtype: float64
- name: eval_8
dtype: float64
- name: eval_9
dtype: float64
- name: eval_10
dtype: float64
- name: eval_11
dtype: float64
- name: eval_12
dtype: float64
- name: eval_13
dtype: float64
- name: eval_14
dtype: float64
- name: eval_15
dtype: float64
- name: eval_16
dtype: float64
- name: eval_17
dtype: float64
- name: eval_18
dtype: float64
- name: eval_19
dtype: float64
- name: eval_20
dtype: float64
- name: eval_21
dtype: float64
- name: eval_22
dtype: float64
- name: eval_23
dtype: float64
- name: eval_24
dtype: float64
- name: eval_25
dtype: float64
- name: eval_26
dtype: float64
- name: eval_27
dtype: float64
- name: eval_28
dtype: float64
- name: eval_29
dtype: float64
- name: eval_30
dtype: float64
- name: eval_31
dtype: float64
splits:
- name: train
num_bytes: 3844721
num_examples: 30
download_size: 1256848
dataset_size: 3844721
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
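A sketch of how this layout is typically consumed, under the assumption that each `eval_i` stores correctness as 0.0/1.0: pass@32 is then the fraction of problems with at least one correct response among the 32 sampled.

```python
from datasets import load_dataset

ds = load_dataset("GitBag/Qwen2.5-7B_hmmt-feb-25_eval", split="train")
eval_cols = [f"eval_{i}" for i in range(32)]

# A problem passes if any of its 32 responses was judged correct.
pass_at_32 = sum(any(row[c] == 1.0 for c in eval_cols) for row in ds) / len(ds)
print(f"pass@32 over {len(ds)} problems: {pass_at_32:.3f}")
```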
|
kmccrock/full_augmented | kmccrock | 2025-05-04T17:47:17Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T17:26:23Z | null | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': abercrombie & fitch
'1': adidas
'2': alpha industries
'3': banana republic
'4': barbour
'5': burberry
'6': calvin klein
'7': canada goose
'8': carhartt
'9': champion
'10': columbia
'11': dickies
'12': eddie bauer
'13': gap
'14': harley davidson
'15': helly hansen
'16': levis
'17': ll bean
'18': nautica
'19': nike
'20': orvis
'21': patagonia
'22': pendleton
'23': peter millar
'24': polartec
'25': ralph lauren
'26': reebok
'27': schott
'28': spyder
'29': the north face
'30': tommy bahama
'31': tommy hilfiger
'32': woolrich
'33': wrangler
splits:
- name: train
num_bytes: 1525684385.9
num_examples: 7900
download_size: 1533340335
dataset_size: 1525684385.9
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
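A brief usage sketch: the `class_label` feature above round-trips between integer ids and brand names via the standard `datasets` `ClassLabel` API.

```python
from datasets import load_dataset

ds = load_dataset("kmccrock/full_augmented", split="train")
label = ds.features["label"]
print(label.int2str(19))                # -> "nike"
print(label.str2int("the north face"))  # -> 29
```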
|
Abdulraheem343/MedicalDataset | Abdulraheem343 | 2025-05-04T17:45:24Z | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-05-04T17:45:04Z | null | ---
license: apache-2.0
---
|
themachinefan/test_headlines_new | themachinefan | 2025-05-04T17:44:17Z | 4 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T19:07:58Z | null | ---
dataset_info:
features:
- name: correct_answer
dtype: string
- name: index
dtype: int64
- name: prompt
dtype: string
- name: sandbagging_environment
dtype: bool
- name: desired_answer
dtype: string
- name: prefix
dtype: string
- name: template
dtype: string
splits:
- name: train
num_bytes: 15292563
num_examples: 11679
- name: test
num_bytes: 1304035
num_examples: 1000
- name: validation
num_bytes: 1312943
num_examples: 1000
download_size: 4826821
dataset_size: 17909541
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
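A minimal sketch: split the validation set on the `sandbagging_environment` flag carried by this schema (the flag's exact semantics are not documented in the card).

```python
from datasets import load_dataset

ds = load_dataset("themachinefan/test_headlines_new", split="validation")
sandbagging = ds.filter(lambda r: r["sandbagging_environment"])
print(f"{len(sandbagging)} / {len(ds)} validation examples are sandbagging prompts")
```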
|
llm-lab/Emo3D | llm-lab | 2025-05-04T17:38:11Z | 4 | 0 | [
"license:mit",
"region:us"
] | [] | 2024-01-31T16:39:39Z | null | ---
dataset_info:
features:
- name: sample_id
dtype: int64
- name: prompt_type
dtype: string
- name: text_1
dtype: string
- name: text_2
dtype: string
- name: text_3
dtype: string
- name: neutral
dtype: float64
- name: happiness
dtype: float64
- name: surprise
dtype: float64
- name: sadness
dtype: float64
- name: anger
dtype: float64
- name: disgust
dtype: float64
- name: fear
dtype: float64
- name: contempt
dtype: float64
- name: img_1
dtype: string
- name: img_2
dtype: string
- name: img_3
dtype: string
- name: img_4
dtype: string
- name: blenshape_score_1
dtype: string
- name: blenshape_score_2
dtype: string
- name: blenshape_score_3
dtype: string
- name: blenshape_score_4
dtype: string
license: mit
---
# Emo3D: Metric and Benchmarking Dataset for 3D Facial Expression Generation from Emotion Description
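A hedged loading sketch (the card declares no split names, so the first available split is used): pick the dominant emotion per sample from the eight probability columns in the schema above.

```python
from datasets import load_dataset

EMOTIONS = ["neutral", "happiness", "surprise", "sadness",
            "anger", "disgust", "fear", "contempt"]

splits = load_dataset("llm-lab/Emo3D")
ds = splits[next(iter(splits))]  # first available split; name not given in the card
row = ds[0]
print(max(EMOTIONS, key=lambda e: row[e]))  # dominant emotion for the first sample
```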
## Citation
```bibtex
@inproceedings{dehghani-etal-2025-emo3d,
title = "{E}mo3{D}: Metric and Benchmarking Dataset for 3{D} Facial Expression Generation from Emotion Description",
author = "Dehghani, Mahshid and
Shafiee, Amirahmad and
Shafiei, Ali and
Fallah, Neda and
Alizadeh, Farahmand and
Gholinejad, Mohammad Mehdi and
Behroozi, Hamid and
Habibi, Jafar and
Asgari, Ehsaneddin",
editor = "Chiruzzo, Luis and
Ritter, Alan and
Wang, Lu",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2025",
month = apr,
year = "2025",
address = "Albuquerque, New Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2025.findings-naacl.173/",
pages = "3158--3172",
ISBN = "979-8-89176-195-7",
abstract = "3D facial emotion modeling has important applications in areas such as animation design, virtual reality, and emotional human-computer interaction (HCI). However, existing models are constrained by limited emotion classes and insufficient datasets. To address this, we introduce Emo3D, an extensive {\textquotedblleft}Text-Image-Expression dataset{\textquotedblright} that spans a wide spectrum of human emotions, each paired with images and 3D blendshapes. Leveraging Large Language Models (LLMs), we generate a diverse array of textual descriptions, enabling the capture of a broad range of emotional expressions. Using this unique dataset, we perform a comprehensive evaluation of fine-tuned language-based models and vision-language models, such as Contrastive Language-Image Pretraining (CLIP), for 3D facial expression synthesis. To better assess conveyed emotions, we introduce Emo3D metric, a new evaluation metric that aligns more closely with human perception than traditional Mean Squared Error (MSE). Unlike MSE, which focuses on numerical differences, Emo3D captures emotional nuances in visual-text alignment and semantic richness. Emo3D dataset and metric hold great potential for advancing applications in animation and virtual reality."
}
``` |
Siddhant00/swos_original | Siddhant00 | 2025-05-04T17:19:06Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T17:07:58Z | null | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
- name: speaker_id
dtype: string
splits:
- name: train
num_bytes: 873180823.8
num_examples: 3336
download_size: 866897475
dataset_size: 873180823.8
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
GitBag/a_star_final_a_star_dapo_7_actor_hmmt-feb-24_eval | GitBag | 2025-05-04T17:10:55Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T17:10:54Z | null | ---
dataset_info:
features:
- name: problem
dtype: string
- name: answer
dtype: string
- name: response_0
dtype: string
- name: response_1
dtype: string
- name: response_2
dtype: string
- name: response_3
dtype: string
- name: response_4
dtype: string
- name: response_5
dtype: string
- name: response_6
dtype: string
- name: response_7
dtype: string
- name: response_8
dtype: string
- name: response_9
dtype: string
- name: response_10
dtype: string
- name: response_11
dtype: string
- name: response_12
dtype: string
- name: response_13
dtype: string
- name: response_14
dtype: string
- name: response_15
dtype: string
- name: response_16
dtype: string
- name: response_17
dtype: string
- name: response_18
dtype: string
- name: response_19
dtype: string
- name: response_20
dtype: string
- name: response_21
dtype: string
- name: response_22
dtype: string
- name: response_23
dtype: string
- name: response_24
dtype: string
- name: response_25
dtype: string
- name: response_26
dtype: string
- name: response_27
dtype: string
- name: response_28
dtype: string
- name: response_29
dtype: string
- name: response_30
dtype: string
- name: response_31
dtype: string
- name: eval_0
dtype: bool
- name: eval_1
dtype: bool
- name: eval_2
dtype: bool
- name: eval_3
dtype: bool
- name: eval_4
dtype: bool
- name: eval_5
dtype: bool
- name: eval_6
dtype: bool
- name: eval_7
dtype: bool
- name: eval_8
dtype: bool
- name: eval_9
dtype: bool
- name: eval_10
dtype: bool
- name: eval_11
dtype: bool
- name: eval_12
dtype: bool
- name: eval_13
dtype: bool
- name: eval_14
dtype: bool
- name: eval_15
dtype: bool
- name: eval_16
dtype: bool
- name: eval_17
dtype: bool
- name: eval_18
dtype: bool
- name: eval_19
dtype: bool
- name: eval_20
dtype: bool
- name: eval_21
dtype: bool
- name: eval_22
dtype: bool
- name: eval_23
dtype: bool
- name: eval_24
dtype: bool
- name: eval_25
dtype: bool
- name: eval_26
dtype: bool
- name: eval_27
dtype: bool
- name: eval_28
dtype: bool
- name: eval_29
dtype: bool
- name: eval_30
dtype: bool
- name: eval_31
dtype: bool
splits:
- name: train
num_bytes: 1654782
num_examples: 30
download_size: 893808
dataset_size: 1654782
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|