Update README.md

For more details, please refer to our GitHub:

- [Multimodal Emotion-Cause Pair Extraction in Conversations](https://github.com/NUSTM/MECPE/tree/main/data)
- [SemEval-2024 Task 3](https://github.com/NUSTM/SemEval-2024_ECAC)

## Dataset Statistics

| Item | Train | Dev | Test | Total |
| :-- | --: | --: | --: | --: |
| Emotion (utterances) | 5,577 | 668 | 1,445 | 7,690 |
| Emotion-cause (utterance) pairs | 7,055 | 866 | 1,873 | 9,794 |

## About Multimodal Data

⚠️ Due to potential copyright issues with the TV show "Friends", we do not provide pre-segmented video clips.

If you need to utilize multimodal data, you may consider the following options:

1. Use the acoustic and visual features we provide:
   - [`audio_embedding_6373.npy`](https://drive.google.com/file/d/1EhU2jFSr_Vi67Wdu1ARJozrTJtgiQrQI/view?usp=share_link): the embedding table composed of the 6373-dimensional acoustic features of each utterance, extracted with openSMILE
   - [`video_embedding_4096.npy`](https://drive.google.com/file/d/1NGSsiQYDTqgen_g9qndSuha29JA60x14/view?usp=share_link): the embedding table composed of the 4096-dimensional visual features of each utterance, extracted with 3D-CNN

2. Since ECF is constructed based on the MELD dataset, you can download the raw video clips from [MELD](https://github.com/declare-lab/MELD). Most utterances in ECF align with MELD. However, **we have made certain modifications to MELD's raw data while constructing ECF, including but not limited to editing utterance text, adjusting timestamps, and adding or removing utterances**. Therefore, some timestamps provided in ECF have been corrected, and there are also new utterances that cannot be found in MELD. Given this, we recommend option (3) if feasible.
3. Download the raw videos of _Friends_ from the website, and use the FFmpeg toolkit to extract audio-visual clips of each utterance based on the timestamps we provide.
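
For option (1), the feature tables can be consumed directly with NumPy. A minimal sketch, assuming each `.npy` file holds a 2-D table with one row per utterance (the official utterance-to-row mapping is an assumption here — check the MECPE repo for the exact indexing scheme; the file below is a small synthetic stand-in, not the real download):

```python
import numpy as np

def load_feature_table(path):
    """Load an embedding table of shape (num_utterances, feature_dim)."""
    return np.load(path)

# Synthetic stand-in for audio_embedding_6373.npy (real rows are 6373-dim).
demo = np.random.rand(10, 6373).astype(np.float32)
np.save("demo_audio_embedding.npy", demo)

table = load_feature_table("demo_audio_embedding.npy")
print(table.shape)   # (10, 6373)
utt_vec = table[3]   # acoustic feature vector of the utterance at row 3
print(utt_vec.shape) # (6373,)
```

The same pattern applies to `video_embedding_4096.npy`, whose rows are 4096-dimensional.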
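For option (3), the per-utterance cut can be scripted around FFmpeg. A minimal sketch, assuming `HH:MM:SS.mmm` timestamps and hypothetical file names; note that stream copy (`-c copy`) is fast but cuts at keyframes, so re-encoding may be needed for frame-accurate clips:

```python
import subprocess

def build_clip_cmd(episode_path, start, end, out_path):
    """Build an ffmpeg command that cuts the [start, end] segment."""
    return [
        "ffmpeg",
        "-i", episode_path,  # full episode video
        "-ss", start,        # clip start timestamp
        "-to", end,          # clip end timestamp
        "-c", "copy",        # stream copy: no re-encoding
        out_path,            # output clip for this utterance
    ]

# Hypothetical episode file and timestamps for one utterance.
cmd = build_clip_cmd("friends_s01e01.mp4", "00:01:23.000",
                     "00:01:27.500", "utt_0001.mp4")
print(" ".join(cmd))
# To actually cut the clip (requires ffmpeg installed):
# subprocess.run(cmd, check=True)
```

Looping this over the utterance timestamps in the ECF annotations yields one audio-visual clip per utterance.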
## Citation
If you find ECF useful for your research, please cite our paper using the following BibTeX entries: