---
license: gpl-3.0
language:
- en
tags:
- emotion-cause-analysis
---

# Emotion-Cause-Generation-in-Friends (ECGF)
We shift the focus of emotion cause analysis from traditional span or utterance extraction to abstractive generation in multimodal conversations. The ECGF dataset builds upon our ECF dataset, containing conversations sourced from the American TV series Friends. The key difference in annotation is that while ECF annotates text spans or utterance indexes as the emotion cause, ECGF provides an abstractive cause that summarizes all the clues from three modalities triggering the given emotion.
For more details, please refer to our GitHub repositories:
- [ACM MM 2024] Observe before Generate: Emotion-Cause aware Video Caption for Multimodal Emotion Cause Generation in Conversations
- [IEEE TAFFC 2024] From Extraction to Generation: Multimodal Emotion-Cause Pair Generation in Conversations
- [IEEE TAFFC 2022] Multimodal Emotion-Cause Pair Extraction in Conversations
## Dataset Statistics
| Item | Train | Dev | Test | Total |
| --- | --- | --- | --- | --- |
| Conversations | 1,001 | 112 | 261 | 1,374 |
| Utterances | 9,966 | 1,087 | 2,566 | 13,619 |
| Emotion (utterances) | 5,577 | 668 | 1,445 | 7,690 |
| Emotion annotated with cause | 5,577 | 668 | 1,445 | 7,690 |
## Supported Tasks
- Multimodal Emotion Recognition in Conversation (MERC)
- Multimodal Emotion Cause Generation in Conversations (MECGC)
- Multimodal Emotion-Cause Pair Generation in Conversations (MECPG)
- ...
## About Multimodal Data
⚠️ Due to potential copyright issues with the TV show "Friends", we cannot provide or share pre-segmented video clips.
If you need to utilize multimodal data, you may consider the following options:

1. Use the acoustic and visual features we provide:
   - `audio_embedding_6373.npy`: the embedding table composed of the 6,373-dimensional acoustic features of each utterance, extracted with openSMILE
   - `video_embedding_4096.npy`: the embedding table composed of the 4,096-dimensional visual features of each utterance, extracted with 3D-CNN
   - The specific usage of these features is detailed in the MECPE repository.
   - If you need newer or more advanced features, please feel free to contact us, and we will do our best to assist with their extraction.
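As a minimal sketch of how the feature tables could be consumed, the snippet below assumes each row of a table is the feature vector of one utterance (the exact index mapping is documented in the MECPE repository). Tiny zero-filled stand-in tables are used here so the sketch runs without the dataset files; in practice you would `np.load` the two `.npy` files instead.

```python
import numpy as np

# Stand-in tables with the real feature dimensions; replace with:
#   audio_table = np.load("audio_embedding_6373.npy")  # (N, 6373) openSMILE
#   video_table = np.load("video_embedding_4096.npy")  # (N, 4096) 3D-CNN
audio_table = np.zeros((4, 6373), dtype=np.float32)
video_table = np.zeros((4, 4096), dtype=np.float32)

def utterance_features(idx):
    """Concatenate the acoustic and visual feature vectors of one utterance."""
    return np.concatenate([audio_table[idx], video_table[idx]])

feat = utterance_features(2)
print(feat.shape)  # (10469,)
```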
2. Download the raw video clips from MELD. Since ECF is constructed on the basis of the MELD dataset, most utterances in ECF correspond to utterances in MELD; the correspondence is given in the last column of the file `all_data_pair_ECFvsMELD.txt`. However, while constructing ECF we made certain modifications to MELD's raw data, including but not limited to editing utterance text, adjusting timestamps, and adding or removing utterances. As a result, some timestamps in ECF have been corrected and may differ from those in MELD, and some new utterances cannot be found in MELD at all. Given this, we recommend option 3 if feasible.
3. Download the raw videos of Friends, and use the FFmpeg toolkit to extract the audio-visual clip of each utterance based on the timestamps we provide in the JSON files.
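For option 3, the clip extraction can be sketched as building one FFmpeg command per utterance. The function below is illustrative: the file names and the `HH:MM:SS.mmm` timestamp strings are assumptions, not the actual JSON schema, and re-encoding is chosen over stream copy for frame-accurate cuts.

```python
import subprocess  # used only when you actually run the command

def ffmpeg_cut(episode_path, start, end, out_path):
    """Build an ffmpeg command that extracts [start, end] from an episode.

    `start`/`end` are timestamp strings such as "00:01:23.500" (assumed
    format; adapt to the timestamps given in the ECGF JSON files).
    """
    cmd = [
        "ffmpeg", "-y",
        "-i", episode_path,
        "-ss", start,       # clip start (output seeking: accurate but slower)
        "-to", end,         # clip end
        "-c:v", "libx264",  # re-encode video for frame-accurate boundaries
        "-c:a", "aac",      # re-encode audio
        out_path,
    ]
    return cmd  # execute with: subprocess.run(cmd, check=True)

print(ffmpeg_cut("friends_s01e01.mp4", "00:01:23.500", "00:01:27.200", "utt_1.mp4"))
```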
## Citation
If you find ECGF useful for your research, please cite our paper using the following BibTeX entries:
@inproceedings{wang2024obg,
  title={Observe before Generate: Emotion-Cause aware Video Caption for Multimodal Emotion Cause Generation in Conversations},
  author={Wang, Fanfan and Ma, Heqing and Shen, Xiangqing and Yu, Jianfei and Xia, Rui},
  booktitle={Proceedings of the 32nd ACM International Conference on Multimedia},
  pages={5820--5828},
  year={2024},
  doi={10.1145/3664647.3681601}
}
@article{ma2024monica,
  author={Ma, Heqing and Yu, Jianfei and Wang, Fanfan and Cao, Hanyu and Xia, Rui},
  journal={IEEE Transactions on Affective Computing},
  title={From Extraction to Generation: Multimodal Emotion-Cause Pair Generation in Conversations},
  year={2024},
  volume={},
  number={},
  pages={1--12},
  doi={10.1109/TAFFC.2024.3446646}
}