mmuratardag committed
Commit · 118b9b4 · 1 Parent(s): eaeef66
fixed the typos
README.md CHANGED
@@ -36,21 +36,27 @@ This model is a fine-tuned RoBERTa-based classifier designed to predict the pres
The model can be directly used for classifying text into the following moral foundations:

**Care**: Care/harm for others, protecting them from harm.
+
**Fairness**: Justice, treating others equally.
+
**Loyalty**: Group loyalty, patriotism, self-sacrifice for the group.
+
**Authority**: Respect for tradition and legitimate authority.
+
**Sanctity**: Disgust, avoiding dangerous diseases and contaminants.

Each foundation is represented as a virtue (positive expression) and a vice (negative expression).

-
+ It's particularly useful for researchers, policymakers, and analysts interested in understanding moral reasoning and rhetoric in different contexts.

### Downstream Use

Potential downstream uses include:

**Content analysis**: Analyzing the moral framing of news articles, social media posts, or other types of text.
+
**Opinion mining**: Understanding the moral values underlying people's opinions and arguments.
+
**Ethical assessment**: Evaluating the ethical implications of decisions, policies, or products.

### Out-of-Scope Use
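A minimal inference sketch for this direct use, assuming a multi-label sigmoid head; the repository id and example sentence below are placeholders, not taken from this card:

```python
# Hedged sketch: model_id is a hypothetical placeholder; check the actual
# repository name on the Hub before running.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "mmuratardag/moral-foundations-classifier"  # placeholder id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "We must protect the vulnerable from exploitation."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# The card reports BCE-with-logits training, i.e. a multi-label head:
# each virtue/vice label gets an independent sigmoid probability.
probs = torch.sigmoid(logits).squeeze(0)
for idx, p in enumerate(probs):
    print(model.config.id2label[idx], f"{p.item():.3f}")
```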
@@ -116,12 +122,12 @@ The training data is a subset of >60M sentences.

The model was fine-tuned using the HuggingFace Transformers library with the following hyperparameters:

- Num_train_epochs: 10
- Per_device_train_batch_size: 8
- Per_device_eval_batch_size: 8
- Learning rate: 3e-5
- Optimizer: AdamW
- Loss function: Binary Cross Entropy with Logits Loss
+ * Num_train_epochs: 10
+ * Per_device_train_batch_size: 8
+ * Per_device_eval_batch_size: 8
+ * Learning rate: 3e-5
+ * Optimizer: AdamW
+ * Loss function: Binary Cross Entropy with Logits Loss

## Evaluation

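As a sketch, these hyperparameters map onto `TrainingArguments` roughly as follows; the base checkpoint, label count wiring, and dataset plumbing are assumptions, not specified by the card:

```python
# Sketch only: base checkpoint and dataset handling are assumptions.
from transformers import (
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base",  # assumed; the card only says "RoBERTa-based"
    num_labels=10,   # 5 foundations x (virtue, vice)
    problem_type="multi_label_classification",  # Trainer then uses BCEWithLogitsLoss
)

args = TrainingArguments(
    output_dir="mf-classifier",
    num_train_epochs=10,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    learning_rate=3e-5,  # Trainer's default optimizer is AdamW
)

# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```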
@@ -147,61 +153,61 @@ The model achieves high overall performance, with variations across different mo

***Per-class metrics:***

- care_virtue:
+ * care_virtue:
accuracy: 0.9954
precision: 0.9779
recall: 0.9758
f1: 0.9769

- care_vice:
+ * care_vice:
accuracy: 0.9960
precision: 0.9734
recall: 0.9506
f1: 0.9619

- fairness_virtue:
+ * fairness_virtue:
accuracy: 0.9974
precision: 0.9786
recall: 0.9645
f1: 0.9715

- fairness_vice:
+ * fairness_vice:
accuracy: 0.9970
precision: 0.9319
recall: 0.8574
f1: 0.8931

- loyalty_virtue:
+ * loyalty_virtue:
accuracy: 0.9945
precision: 0.9811
recall: 0.9780
f1: 0.9795

- loyalty_vice:
+ * loyalty_vice:
accuracy: 0.9972
precision: 1.0000
recall: 0.0531
f1: 0.1008

- authority_virtue:
+ * authority_virtue:
accuracy: 0.9914
precision: 0.9621
recall: 0.9683
f1: 0.9652

- authority_vice:
+ * authority_vice:
accuracy: 0.9963
precision: 0.9848
recall: 0.5838
f1: 0.7331

- sanctity_virtue:
+ * sanctity_virtue:
accuracy: 0.9963
precision: 0.9640
recall: 0.9458
f1: 0.9548

- sanctity_vice:
+ * sanctity_vice:
accuracy: 0.9958
precision: 0.9538
recall: 0.8530
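Per-class numbers like these are typically computed by thresholding the sigmoid outputs and scoring each label independently; a sketch with scikit-learn, where the 0.5 decision threshold is an assumption, not stated in the card:

```python
# Sketch: the 0.5 threshold is assumed; the card does not state it.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def per_class_report(y_true, probs, label_names):
    """y_true and probs are (n_samples, n_labels) arrays."""
    y_pred = (probs >= 0.5).astype(int)
    for i, name in enumerate(label_names):
        p, r, f1, _ = precision_recall_fscore_support(
            y_true[:, i], y_pred[:, i], average="binary", zero_division=0
        )
        acc = accuracy_score(y_true[:, i], y_pred[:, i])
        print(f"{name}: accuracy={acc:.4f} precision={p:.4f} "
              f"recall={r:.4f} f1={f1:.4f}")
```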
@@ -245,7 +251,7 @@ via my personal website. thx

## Citation

- ***
+ ***If you use this model in your research or applications, please cite it as follows:***

Ardag, M.M. (2024) Moral Foundations Classifier. HuggingFace. https://doi.org/10.57967/hf/2774