---
dataset_info:
  features:
  - name: spike_counts
    sequence:
      sequence: uint8
  - name: subject_id
    dtype: string
  - name: session_id
    dtype: string
  - name: segment_id
    dtype: string
  - name: source_dataset
    dtype: string
  splits:
  - name: train
    num_bytes: 33983349435.45733
    num_examples: 4141
  - name: test
    num_bytes: 344675362.5426727
    num_examples: 42
  download_size: 5954621801
  dataset_size: 34328024798
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
tags:
- v1.0
---

# The Neural Pile (primate)

This dataset contains 34.3 billion tokens of curated spiking neural activity data recorded from primates. The code and detailed instructions for creating this dataset from scratch can be found at [this GitHub repository](https://github.com/eminorhan/neural-pile-primate). The dataset takes up about 34 GB on disk when stored as memory-mapped `.arrow` files (the format used by the local caching system of the Hugging Face `datasets` library). The dataset comes with separate `train` and `test` splits. You can load, *e.g.*, the `train` split of the dataset as follows:

```python
from datasets import load_dataset

ds = load_dataset("eminorhan/neural-pile-primate", num_proc=32, split='train')
```

You can then display the first data row:

```python
>>> print(ds[0])
{
  'spike_counts': ...,
  'subject_id': 'sub-Reggie',
  'session_id': 'sub-Reggie_ses-20170115T125333_behavior+ecephys',
  'segment_id': 'segment_2',
  'source_dataset': 'even-chen'
}
```

where:

* `spike_counts` is a `uint8` array containing the spike count data. Its shape is `(n, t)`, where `n` is the number of simultaneously recorded neurons in that session and `t` is the number of time bins (20 ms bins), as illustrated in the sketch after this list.
* `source_dataset` is an identifier string indicating the source dataset from which that particular row of data came.
* `subject_id` is an identifier string indicating the subject the data were recorded from.
* `session_id` is an identifier string indicating the recording session.
* `segment_id` identifies the segment (or chunk) within a session: long recording sessions (>10M tokens) were split into equal-sized chunks of no more than 10M tokens each, so a full session can be reproduced from its chunks if desired (a reconstruction sketch is given at the end of this card).
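
For concreteness, here is a minimal sketch of how one might inspect the `spike_counts` array of a single row with NumPy. The `np.asarray` conversion and the per-row token count (`n * t`, treating each spike count as one token) are illustrative choices rather than part of the dataset's API:

```python
import numpy as np
from datasets import load_dataset

ds = load_dataset("eminorhan/neural-pile-primate", num_proc=32, split="train")
row = ds[0]

# `spike_counts` comes back as a nested list; convert it to a 2-d uint8 array.
spikes = np.asarray(row["spike_counts"], dtype=np.uint8)
n_neurons, n_bins = spikes.shape  # (n, t)

print(f"{row['source_dataset']} / {row['subject_id']} / {row['session_id']}")
print(f"{n_neurons} neurons x {n_bins} bins of 20 ms "
      f"(~{n_bins * 0.02:.1f} s), {spikes.size} spike-count tokens")
```
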
The dataset rows are pre-shuffled, so users do not need to re-shuffle them.
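
If you want to reassemble a full session from its chunks, here is a rough sketch. It assumes that the segments of a session share the neuron dimension, that they are contiguous in time, and that the numeric suffix of `segment_id` gives their temporal order; none of these assumptions is guaranteed by this card, so please verify them against the source repository before relying on the result:

```python
import numpy as np
from datasets import load_dataset

ds = load_dataset("eminorhan/neural-pile-primate", num_proc=32, split="train")

# Hypothetical target session, taken from the example row shown above.
target = "sub-Reggie_ses-20170115T125333_behavior+ecephys"

# Collect all segments belonging to that session.
session_rows = ds.filter(lambda r: r["session_id"] == target)

# Assumed ordering: sort by the numeric suffix of `segment_id` (e.g. 'segment_2' -> 2).
ordered = sorted(session_rows, key=lambda r: int(r["segment_id"].rsplit("_", 1)[-1]))

# Assumed stitching: concatenate segments along the time axis.
full_session = np.concatenate(
    [np.asarray(r["spike_counts"], dtype=np.uint8) for r in ordered], axis=1
)
print(full_session.shape)  # (n_neurons, total_time_bins)
```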