---
dataset_info:
  features:
  - name: spike_counts
    sequence:
      sequence: uint8
  - name: subject_id
    dtype: string
  - name: session_id
    dtype: string
  - name: segment_id
    dtype: string
  - name: source_dataset
    dtype: string
  splits:
  - name: train
    num_bytes: 33983349435.45733
    num_examples: 4141
  - name: test
    num_bytes: 344675362.5426727
    num_examples: 42
  download_size: 5954621801
  dataset_size: 34328024798
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
tags:
- v1.0
---

# The Neural Pile (primate)

This dataset contains 34.3 billion tokens of curated spiking neural activity data recorded from primates. 
The code and detailed instructions for creating this dataset from scratch can be found at [this GitHub repository](https://github.com/eminorhan/neural-pile-primate).
The dataset takes up about 34 GB on disk when stored as memory-mapped `.arrow` files (which is the format used by the local caching system of the Hugging Face 
`datasets` library). The dataset comes with separate `train` and `test` splits. You can load, *e.g.*, the `train` split of the dataset as follows:
```python
from datasets import load_dataset

ds = load_dataset("eminorhan/neural-pile-primate", num_proc=32, split='train')
```
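If you'd rather not materialize the full local cache up front, the `datasets` library's standard streaming mode should also work (a brief sketch, not specific to this dataset):
```python
from datasets import load_dataset

# stream the train split instead of downloading and caching everything first
ds_stream = load_dataset("eminorhan/neural-pile-primate", split="train", streaming=True)

# peek at the first row without pulling the whole dataset locally
first_row = next(iter(ds_stream))
print(first_row["subject_id"], first_row["source_dataset"])
```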
and display the first data row:
```python
>>> print(ds[0])
{
'spike_counts': ...,
'subject_id': 'sub-Reggie',
'session_id': 'sub-Reggie_ses-20170115T125333_behavior+ecephys',
'segment_id': 'segment_2',
'source_dataset': 'even-chen'
}
```
where:
* `spike_counts` is a `uint8` array containing the spike count data. Its shape is `(n,t)` where `n` is the number of simultaneously recorded neurons in that session and `t` is the number of time bins (20 ms bins).
* `source_dataset` is an identifier string indicating the source dataset from which that particular row of data came.
* `subject_id` is an identifier string indicating the subject the data were recorded from.
* `session_id` is an identifier string indicating the recording session.
* `segment_id` is a segment (or chunk) identifier for cases where a recording session was split into smaller chunks: we split long sessions (>10M tokens) into equal-sized chunks of at most 10M tokens each, so the whole session can be reconstructed from its chunks if desired (a sketch of this is given below).
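
As a concrete illustration of the `spike_counts` layout and the segment bookkeeping, here is a minimal sketch. The session name is taken from the example row above; the reassembly logic is an assumption (it presumes segments were cut along the time axis and that segment IDs follow the `segment_<k>` pattern shown above), and the naive `filter` scan is fine for illustration but slow on the full dataset:
```python
import numpy as np

# a single row: spike_counts converts to an (n_neurons, n_time_bins) uint8 array
row = ds[0]
spikes = np.asarray(row["spike_counts"], dtype=np.uint8)
n_neurons, n_bins = spikes.shape
print(f"{n_neurons} neurons x {n_bins} time bins (20 ms each)")

# illustrative example: reassemble one session from its segments
# (assumes segments split the session along the time axis and that
#  segment IDs follow the 'segment_<k>' pattern seen above)
session = "sub-Reggie_ses-20170115T125333_behavior+ecephys"
segments = sorted(
    ds.filter(lambda r: r["session_id"] == session),
    key=lambda r: int(r["segment_id"].split("_")[-1]),
)
full_session = np.concatenate(
    [np.asarray(r["spike_counts"], dtype=np.uint8) for r in segments], axis=1
)
print(full_session.shape)
```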

The dataset rows are pre-shuffled, so users do not need to re-shuffle them.