jobs-git ma2za committed
Commit caa59fe · verified · 0 Parent(s):

Duplicate from ma2za/many_emotions


Co-authored-by: paolo mazza <[email protected]>

.gitattributes ADDED
@@ -0,0 +1,54 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.lz4 filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
+ # Image files - uncompressed
+ *.bmp filter=lfs diff=lfs merge=lfs -text
+ *.gif filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.tiff filter=lfs diff=lfs merge=lfs -text
+ # Image files - compressed
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.webp filter=lfs diff=lfs merge=lfs -text
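
The `.gitattributes` entries above route matching file types through Git LFS, which is why the `data/*.gz` archives added later in this commit appear as LFS pointer files. A minimal, illustrative sketch of that pattern matching (simplified: real `.gitattributes` matching is richer than `fnmatch`, and the pattern list below is only a subset):

```python
import fnmatch

# Subset of the LFS patterns declared in .gitattributes above.
lfs_patterns = ["*.gz", "*.arrow", "*.parquet", "*tfevents*"]

repo_files = [
    "data/many_emotions.json.gz",
    "data/split_dataset_train.jsonl.gz",
    "many_emotions.py",
    "requirements.txt",
]

for path in repo_files:
    name = path.rsplit("/", 1)[-1]
    tracked = any(fnmatch.fnmatch(name, pattern) for pattern in lfs_patterns)
    print(f"{path}: {'stored via LFS' if tracked else 'stored as a regular blob'}")
```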
README.md ADDED
@@ -0,0 +1,75 @@
+ ---
+ license:
+   apache-2.0
+ task_categories:
+ - text-classification
+ multilinguality:
+ - multilingual
+ source_datasets:
+ - dair-ai/emotion
+ - daily_dialog
+ - go_emotions
+ language:
+ - en
+ size_categories:
+ - 100K<n<1M
+ tags:
+ - emotion
+ ---
+
+ # Dataset Card for "many_emotions"
+
+ ## Dataset Description
+
+ - **Homepage:**
+
+ ### Dataset Summary
+
+ ### Languages
+
+ [More Information Needed]
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ [More Information Needed]
+
+ ### Data Fields
+
+ The data fields are:
+
+ - `id`: a unique identifier.
+ - `text`: a `string` feature.
+ - `label`: a classification label, with possible values `anger` (0), `fear` (1), `joy` (2), `love` (3), `sadness` (4), `surprise` (5), and `neutral` (6).
+ - `license`: the license inherited from the source dataset.
+ - `dataset`: the source dataset.
+ - `language`: the language of the text.
+
+ ### Data Splits
+
+ The dataset has two configurations:
+
+ - `raw`: one split per language (5 languages)
+ - `split`: train, validation, and test splits
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ The `raw` configuration contains duplicates.
+
+ In the `split` configuration, identical rows may appear with different labels.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Licensing Information
+
+ Each row carries its own license, inherited from its source dataset.
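
As a usage note for the dataset card above — a minimal sketch (assuming the Hugging Face `datasets` library; recent versions may additionally require `trust_remote_code=True` for script-based datasets) of loading the default `split` configuration and reading back the fields listed under Data Fields:

```python
from datasets import load_dataset

# Default configuration is "split" (train/validation/test).
ds = load_dataset("ma2za/many_emotions", name="split")

# Each record has: id, text, label, dataset, license, language.
example = ds["train"][0]
print(example)

# "label" is a ClassLabel; map the integer back to its name
# (anger, fear, joy, love, sadness, surprise, neutral).
label_names = ds["train"].features["label"].names
print(label_names[example["label"]])
```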
data/many_emotions.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6c943d625d317ffef435684d8059936ac571c7094af846ccee7fba3b685d5b9b
+ size 128477621
data/split_dataset_test.jsonl.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a26f085aa3157ffe3f8817d94d64040050072341d6395a12e7b6b31ad5b338f7
+ size 7981677
data/split_dataset_train.jsonl.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b420aba48b9cd72b9363a9794dad3346cd572ae7418b77962b14fd44b725eeb2
+ size 143575307
data/split_dataset_validation.jsonl.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:24c46037cc2be059040caf83bfd8fb76110111517857602acdc1caa1d41848c5
+ size 7969332
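
The four `data/*.gz` entries above are Git LFS pointer files (version, `oid`, `size`), not the compressed data itself; the actual archives live in LFS storage. A small illustrative sketch (hypothetical helper, not part of this repo) of reading such a pointer:

```python
def parse_lfs_pointer(text: str) -> dict:
    # A Git LFS pointer is a short key/value text file:
    # version URL, "oid sha256:<hex>", and size in bytes.
    return dict(line.split(" ", 1) for line in text.strip().splitlines())

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:6c943d625d317ffef435684d8059936ac571c7094af846ccee7fba3b685d5b9b
size 128477621"""

fields = parse_lfs_pointer(pointer)
print(fields["oid"], int(fields["size"]))
```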
many_emotions.py ADDED
@@ -0,0 +1,153 @@
+ import json
+ from typing import List
+
+ import datasets
+ from datasets import ClassLabel, Value, load_dataset
+
+ _LANGUAGES = ["en", "fr", "it", "es", "de"]
+
+ _SUB_CLASSES = [
+     "anger",
+     "fear",
+     "joy",
+     "love",
+     "sadness",
+     "surprise",
+     "neutral",
+ ]
+
+ _CLASS_NAMES = [
+     "no emotion",
+     "happiness",
+     "admiration",
+     "amusement",
+     "anger",
+     "annoyance",
+     "approval",
+     "caring",
+     "confusion",
+     "curiosity",
+     "desire",
+     "disappointment",
+     "disapproval",
+     "disgust",
+     "embarrassment",
+     "excitement",
+     "fear",
+     "gratitude",
+     "grief",
+     "joy",
+     "love",
+     "nervousness",
+     "optimism",
+     "pride",
+     "realization",
+     "relief",
+     "remorse",
+     "sadness",
+     "surprise",
+     "neutral",
+ ]
+
+
+ class EmotionsDatasetConfig(datasets.BuilderConfig):
+     def __init__(self, features, label_classes, **kwargs):
+         super().__init__(**kwargs)
+         self.features = features
+         self.label_classes = label_classes
+
+
+ class EmotionsDataset(datasets.GeneratorBasedBuilder):
+     BUILDER_CONFIGS = [
+         EmotionsDatasetConfig(
+             name="raw",
+             label_classes=_SUB_CLASSES,
+             features=["text", "label", "dataset", "license"],
+         ),
+         EmotionsDatasetConfig(
+             name="split",
+             label_classes=_SUB_CLASSES,
+             features=["text", "label", "dataset", "license", "language"],
+         ),
+     ]
+
+     DEFAULT_CONFIG_NAME = "split"
+
+     def _info(self):
+         features = {
+             "id": datasets.Value("string"),
+             "text": Value(dtype="string", id=None),
+             "label": ClassLabel(names=_SUB_CLASSES, id=None),
+             "dataset": Value(dtype="string", id=None),
+             "license": Value(dtype="string", id=None),
+         }
+         if self.config.name == "split":
+             features.update({"language": ClassLabel(names=_LANGUAGES, id=None)})
+         return datasets.DatasetInfo(features=datasets.Features(features))
+
+     def _split_generators(
+         self, dl_manager: datasets.DownloadManager
+     ) -> List[datasets.SplitGenerator]:
+         splits = []
+         if self.config.name == "raw":
+             downloaded_files = dl_manager.download_and_extract(
+                 ["data/many_emotions.json.gz"]
+             )
+             for lang in _LANGUAGES:
+                 splits.append(
+                     datasets.SplitGenerator(
+                         name=lang,
+                         gen_kwargs={
+                             "filepaths": downloaded_files,
+                             "language": lang,
+                             "dataset": "raw",
+                         },
+                     )
+                 )
+         else:
+             for split in ["train", "validation", "test"]:
+                 downloaded_files = dl_manager.download_and_extract(
+                     [f"data/split_dataset_{split}.jsonl.gz"]
+                 )
+                 splits.append(
+                     datasets.SplitGenerator(
+                         name=split,
+                         gen_kwargs={"filepaths": downloaded_files, "dataset": "split"},
+                     )
+                 )
+         return splits
+
+     def _generate_examples(self, filepaths, dataset, license=None, language=None):
+         if dataset == "raw":
+             for i, filepath in enumerate(filepaths):
+                 with open(filepath, encoding="utf-8") as f:
+                     for idx, line in enumerate(f):
+                         example = json.loads(line)
+                         if language != "all":
+                             example = {
+                                 "id": example["id"],
+                                 "text": example[
+                                     "text" if language == "en" else language
+                                 ],
+                                 "label": example["label"],
+                                 "dataset": example["dataset"],
+                                 "license": example["license"],
+                             }
+                         label = _CLASS_NAMES[example["label"]]
+                         if label == "no emotion":
+                             label = "neutral"
+                         elif label == "happiness":
+                             label = "joy"
+                         example.update({"label": label})
+                         yield example["id"], example
+         else:
+             for i, filepath in enumerate(filepaths):
+                 with open(filepath, encoding="utf-8") as f:
+                     for idx, line in enumerate(f):
+                         example = json.loads(line)
+                         yield example["id"], example
+
+
+ if __name__ == "__main__":
+     dataset = load_dataset("ma2za/many_emotions", name="raw")
+     print()
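
A brief usage sketch for the loading script above — the `raw` configuration yields one split per language, while `split` yields conventional train/validation/test splits (this assumes the `datasets` library; depending on its version, loading a script-based dataset may additionally require `trust_remote_code=True`):

```python
from datasets import load_dataset

# "raw": one split per language, as built in _split_generators above.
raw = load_dataset("ma2za/many_emotions", name="raw")
print(raw["en"][0])

# "split": conventional train/validation/test splits.
split = load_dataset("ma2za/many_emotions", name="split")
print({name: len(part) for name, part in split.items()})
```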
requirements.txt ADDED
@@ -0,0 +1 @@
+ datasets