Update README.md
README.md CHANGED
@@ -22,7 +22,7 @@ Part of MONSTER: <https://arxiv.org/abs/2502.15122>.
|License|Other|
|Citations|[1] [2] [3]|

-***Dreamer
+***Dreamer*** is a multimodal dataset that includes electroencephalogram (EEG) and electrocardiogram (ECG) signals recorded during affect elicitation using audio-visual stimuli [1], captured with a 14-channel Emotiv EPOC headset at a sampling rate of 128 Hz. It consists of data recorded from 23 participants, along with their self-assessments of affective states (valence, arousal, and dominance) after each stimulus. For our classification task, we focus on the arousal and valence labels, referred to as ***DreamerA*** and ***DreamerV*** respectively. The processed datasets each consist of 170,246 multivariate time series of length 256 (i.e., 2 seconds of data per time series at a sampling rate of 128 Hz).

The dataset is publicly available [2], and we utilize the Torcheeg toolkit for preprocessing, including signal cropping and low-pass and high-pass filtering [3]. Note that only EEG data is analyzed in this study, with ECG signals excluded. Labels for arousal and valence are binarized, assigning values below 3 to class 1 and values of 3 or higher to class 2, and the data have been split into cross-validation folds by participant.
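As a rough illustration of this pipeline, the sketch below loads DREAMER with Torcheeg, segments the recordings into 2-second (256-sample) windows, and binarizes the arousal rating around 3. The `io_path` and `mat_path` values are placeholders, the low-pass/high-pass filtering step is omitted, and this is only a sketch, not the exact preprocessing script used to build the MONSTER version of the dataset.

```python
from torcheeg.datasets import DREAMERDataset
from torcheeg import transforms

# Placeholder paths: DREAMER.mat must be obtained separately [2].
dataset = DREAMERDataset(
    io_path='./dreamer_io',            # cache directory (assumed name)
    mat_path='./DREAMER.mat',          # raw DREAMER recordings
    chunk_size=256,                    # 2-second windows at 128 Hz, as described above
    online_transform=transforms.ToTensor(),
    label_transform=transforms.Compose([
        transforms.Select('arousal'),  # use 'valence' instead for DreamerV
        transforms.Binary(3.0),        # binarize the 1-5 rating around 3
    ]),
)

x, y = dataset[0]
print(x.shape, y)  # expected roughly (14, 256): 14 EEG channels, 256 samples
```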
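The participant-based cross-validation can be emulated generically with scikit-learn's `GroupKFold`, as sketched below. The file names, array layout, and the choice of 5 splits are illustrative assumptions, not the released fold definitions.

```python
import numpy as np
from sklearn.model_selection import GroupKFold

# Hypothetical arrays: X holds the (n_windows, 14, 256) EEG windows, y the
# binarized labels, and groups the participant ID (0-22) of each window.
X = np.load('dreamer_a_X.npy')
y = np.load('dreamer_a_y.npy')
groups = np.load('dreamer_a_groups.npy')

# Split by participant so no subject appears in both train and test folds.
cv = GroupKFold(n_splits=5)
for fold, (train_idx, test_idx) in enumerate(cv.split(X, y, groups=groups)):
    assert set(groups[train_idx]).isdisjoint(groups[test_idx])
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test windows")
```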