---
language:
- en
tags:
- emotion-cause-analysis
---

# Emotion-Cause-Generation-in-Friends (ECGF)

We shift the focus of emotion cause analysis from traditional span or utterance extraction to abstractive generation in multimodal conversations.
The **ECGF** dataset builds upon our [ECF](https://huggingface.co/datasets/NUSTM/ECF) dataset, containing conversations sourced from the American TV series *Friends*. The key difference in annotation is that while ECF annotates text spans or utterance indices as the emotion cause, ECGF provides an abstractive cause that summarizes the clues from all three modalities that trigger the given emotion.

For more details, please refer to our GitHub repositories:

- [ACM MM 2024] [Observe before Generate: Emotion-Cause aware Video Caption for Multimodal Emotion Cause Generation in Conversations](https://github.com/NUSTM/MECGC)
- [IEEE TAFFC 2024] [From Extraction to Generation: Multimodal Emotion-Cause Pair Generation in Conversations](https://github.com/NUSTM/MECPG)
- [IEEE TAFFC 2022] [Multimodal Emotion-Cause Pair Extraction in Conversations](https://github.com/NUSTM/MECPE)

## Dataset Statistics

| Item                         | Train | Dev   | Test  | Total  |
| ---------------------------- | ----- | ----- | ----- | ------ |
| Conversations                | 1,001 | 112   | 261   | 1,374  |
| Utterances                   | 9,966 | 1,087 | 2,566 | 13,619 |
| Emotion (utterances)         | 5,577 | 668   | 1,445 | 7,690  |
| Emotion annotated with cause | 5,577 | 668   | 1,445 | 7,690  |

## Supported Tasks

- Multimodal Emotion Recognition in Conversation (MERC)
- Multimodal Emotion Cause Generation in Conversations (MECGC)
- Multimodal Emotion-Cause Pair Generation in Conversations (MECPG)
- ...
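
As a quick start, here is a minimal sketch of loading ECGF with the Hugging Face `datasets` library. The repository id `NUSTM/ECGF` and the split names are assumptions inferred from this page and the Train/Dev/Test statistics above; inspect the returned object before relying on specific fields:

```python
from datasets import load_dataset

# Assumed repo id; adjust if the dataset is hosted under a different name.
ds = load_dataset("NUSTM/ECGF")

# Inspect the available splits and annotation fields.
print(ds)
print(ds["train"][0])  # "train" is an assumed split name
```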

## About Multimodal Data

⚠️ Due to potential copyright issues with the TV show *Friends*, we cannot provide or share the pre-segmented video clips.

If you need to use the multimodal data, you may consider the following options:

1. Use the acoustic and visual features we provide (see the loading sketch after this list):
   - [`audio_embedding_6373.npy`](https://drive.google.com/file/d/1EhU2jFSr_Vi67Wdu1ARJozrTJtgiQrQI/view?usp=share_link): the embedding table composed of the 6373-dimensional acoustic features of each utterance, extracted with openSMILE
   - [`video_embedding_4096.npy`](https://drive.google.com/file/d/1NGSsiQYDTqgen_g9qndSuha29JA60x14/view?usp=share_link): the embedding table composed of the 4096-dimensional visual features of each utterance, extracted with a 3D-CNN
   - The specific usage of these features is detailed in the [MECPE](https://github.com/NUSTM/MECPE) repository.
   - If you need newer or more advanced features, please feel free to contact us, and we will do our best to assist with their extraction.

2. Download the raw video clips from [MELD](https://github.com/declare-lab/MELD). Since ECF was constructed on the basis of the MELD dataset, most utterances in ECF correspond to utterances in MELD; the correspondence is given in the last column of the file [all_data_pair_ECFvsMELD.txt](https://github.com/NUSTM/MECPE/blob/main/data/all_data_pair_ECFvsMELD.txt) (see the mapping sketch after this list). However, **we made certain modifications to MELD's raw data while constructing ECF, including but not limited to editing utterance text, adjusting timestamps, and adding or removing utterances**. Therefore, some timestamps provided in ECF have been corrected and may differ from those in MELD, and some new utterances cannot be found in MELD at all. Given this, we recommend option (3) if feasible.

3. Download the raw videos of *Friends* from the web, and use the [FFmpeg](https://ffmpeg.org/) toolkit to extract the audio-visual clip of each utterance based on the **timestamps** we provide in the JSON files (see the clipping sketch after this list).
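
For option (1), here is a minimal sketch of loading the embedding tables with NumPy. The shapes and the row-to-utterance mapping are assumptions; the authoritative usage is documented in the MECPE repository:

```python
import numpy as np

# Embedding tables downloaded from the Google Drive links above.
audio = np.load("audio_embedding_6373.npy")  # assumed shape: (num_utterances, 6373)
video = np.load("video_embedding_4096.npy")  # assumed shape: (num_utterances, 4096)
print(audio.shape, video.shape)

# Illustrative early fusion for one utterance; the actual
# utterance-to-row mapping follows the MECPE repository.
fused = np.concatenate([audio[0], video[0]])
```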
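
For option (2), a sketch of reading the ECF-to-MELD correspondence with pandas. Only the fact that the MELD id sits in the last column comes from this README; the delimiter and overall layout are assumptions, so inspect the file first:

```python
import pandas as pd

# Assumed tab-separated layout; verify against the actual file.
rows = pd.read_csv("all_data_pair_ECFvsMELD.txt", sep="\t", header=None)
meld_ids = rows.iloc[:, -1]  # last column: the corresponding MELD utterance
print(meld_ids.head())
```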
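
For option (3), a minimal sketch of cutting one utterance clip with FFmpeg from Python. The episode file name, timestamps, and output name are hypothetical placeholders; the real values come from the raw videos you obtain and from the timestamps in the JSON files:

```python
import subprocess

# Hypothetical episode file and utterance timestamps.
episode = "friends_s01e01.mp4"
start, end = "00:03:12.500", "00:03:15.200"

# Stream-copy the clip without re-encoding (requires ffmpeg on PATH).
# Note: -c copy cuts on keyframes; drop it to re-encode for frame-accurate cuts.
subprocess.run(
    ["ffmpeg", "-i", episode, "-ss", start, "-to", end,
     "-c", "copy", "dia1utt1.mp4"],
    check=True,
)
```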

## Citation

If you find ECGF useful for your research, please cite our papers using the following BibTeX entries:

```bibtex
@inproceedings{wang2024obg,
  title     = {Observe before Generate: Emotion-Cause aware Video Caption for Multimodal Emotion Cause Generation in Conversations},
  author    = {Wang, Fanfan and Ma, Heqing and Shen, Xiangqing and Yu, Jianfei and Xia, Rui},
  booktitle = {Proceedings of the 32nd ACM International Conference on Multimedia},
  pages     = {5820--5828},
  year      = {2024},
  doi       = {10.1145/3664647.3681601}
}

@article{ma2024monica,
  author  = {Ma, Heqing and Yu, Jianfei and Wang, Fanfan and Cao, Hanyu and Xia, Rui},
  journal = {IEEE Transactions on Affective Computing},
  title   = {From Extraction to Generation: Multimodal Emotion-Cause Pair Generation in Conversations},
  year    = {2024},
  pages   = {1--12},
  doi     = {10.1109/TAFFC.2024.3446646}
}
```