---
library_name: transformers
license: apache-2.0
datasets:
- indonlp/indonlu
language:
- id
metrics:
- f1
- accuracy
- recall
- precision
base_model:
- FacebookAI/xlm-roberta-base
---

# Indonesian Sentiment Analysis with XLM-RoBERTa

Sentiment analysis model for the Indonesian language, built from [xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) using the [indonlp/indonlu](https://huggingface.co/datasets/indonlp/indonlu) dataset.
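
A minimal inference sketch using the 🤗 `pipeline` API. The repository id below is a placeholder (this card does not state the published model id), and the returned label names depend on the `id2label` mapping set during training:

```python
from transformers import pipeline

# Placeholder: replace with the actual Hub repository id or a local path.
MODEL_ID = "your-username/xlmr-indonesian-sentiment"

classifier = pipeline("text-classification", model=MODEL_ID)

# Indonesian: "The service was fast and friendly, I am very satisfied!"
print(classifier("Pelayanannya cepat dan ramah, saya sangat puas!"))
# e.g. [{'label': 'positive', 'score': 0.98}] -- labels depend on id2label
```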

## Model Details

### Model Description

This model fine-tunes [xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the sentiment analysis task of the [IndoNLU](https://huggingface.co/datasets/indonlp/indonlu) benchmark to classify the sentiment of Indonesian text.

- **Developed by:** [Muhamad Rizky Yanuar](https://arcleife.github.io/portfolio/)
- **Model type:** [XLM-RoBERTa](https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta)
- **Language(s) (NLP):** Indonesian
- **License:** [Apache license 2.0](https://www.apache.org/licenses/LICENSE-2.0)
- **Finetuned from model:** [xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base)

## Training Details

### Training Data

The model was fine-tuned on the [sentiment analysis subset of IndoNLU](https://huggingface.co/datasets/indonlp/indonlu), created by indonlp.
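
The data can be loaded with 🤗 Datasets; a sketch, assuming the sentiment task corresponds to IndoNLU's `smsa` configuration (worth verifying, and script-based datasets may require `trust_remote_code=True` on recent `datasets` versions):

```python
from datasets import load_dataset

# "smsa" is IndoNLU's document-level sentiment analysis configuration.
ds = load_dataset("indonlp/indonlu", "smsa", trust_remote_code=True)

print(ds)              # DatasetDict with train/validation/test splits
print(ds["train"][0])  # e.g. {'text': '...', 'label': 0}
```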

### Training Procedure

The full fine-tuning script is available [here](https://github.com/arcleife/notebooks/blob/main/sentiment_finetuning.py).

**Training hyperparameters**

- num_train_epochs = 5
- learning_rate = 5e-6
- weight_decay = 1e-1
- per_device_train_batch_size = 16
- per_device_eval_batch_size = 16
- fp16 = True
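
A sketch of how these hyperparameters map onto a 🤗 `TrainingArguments` setup. The linked script above is the authoritative version; `num_labels=3` and the per-epoch evaluation schedule are assumptions inferred from this card:

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("FacebookAI/xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "FacebookAI/xlm-roberta-base",
    num_labels=3,  # assumption: positive / neutral / negative
)

args = TrainingArguments(
    output_dir="xlmr-id-sentiment",  # hypothetical output directory
    num_train_epochs=5,
    learning_rate=5e-6,
    weight_decay=1e-1,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    fp16=True,
    eval_strategy="epoch",  # assumption; "evaluation_strategy" on older transformers
)

# trainer = Trainer(model=model, args=args,
#                   train_dataset=..., eval_dataset=..., compute_metrics=...)
# trainer.train()
```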

## Evaluation

Validation-set results per training epoch:

| Epoch | Training Loss | Validation Loss | F1       | Recall   | Precision |
|-------|---------------|-----------------|----------|----------|-----------|
| 1     | No log        | 0.283834        | 0.908730 | 0.908730 | 0.908730  |
| 2     | No log        | 0.248232        | 0.930952 | 0.930952 | 0.930952  |
| 3     | No log        | 0.282172        | 0.930952 | 0.930952 | 0.930952  |
| 4     | No log        | 0.257302        | 0.936508 | 0.936508 | 0.936508  |
| 5     | No log        | 0.271212        | 0.939683 | 0.939683 | 0.939683  |
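
F1, recall, and precision coincide at every epoch, which is consistent with micro averaging: for single-label classification, micro-averaged precision, recall, and F1 all reduce to accuracy. A sketch of a `compute_metrics` function that would produce such numbers, assuming micro averaging was used:

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # average="micro" makes precision == recall == F1 (== accuracy)
    # for single-label classification, matching the table above.
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="micro"
    )
    return {"f1": f1, "recall": recall, "precision": precision}
```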