---
base_model:
  - lemon-mint/LaBSE-EnKo-Nano-Preview-v0.3
datasets:
  - devngho/ko_llm_annotations
language:
  - ko
library_name: transformers
license: mit
metrics:
  - f1
---

# devngho/ko_edu_classifier_v2_lemon-mint_LaBSE-EnKo-Nano-Preview-v0.3

์ด ๋ชจ๋ธ์€ lemon-mint/LaBSE-EnKo-Nano-Preview-v0.3์— classifier๋ฅผ ์ถ”๊ฐ€ํ•œ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. HuggingFaceFW/fineweb-edu-classifier์˜ ํ•œ๊ตญ์–ด ๋ฒ„์ „์„ ๋ชฉํ‘œ๋กœ, ํ•œ๊ตญ์–ด ์›น ํŽ˜์ด์ง€์˜ ๊ต์œก์„ฑ ์ ์ˆ˜๋ฅผ ํ‰๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ํ•™์Šต์—๋Š” blueapple8259/c4-ko-cleaned-2์—์„œ ์ถ”์ถœํ•œ 500k ์ƒ˜ํ”Œ์„ Qwen/Qwen2.5-32B-Instruct๋กœ ํ‰๊ฐ€ํ•œ devngho/ko_llm_annotations ๋ฐ์ดํ„ฐ์…‹์ด ์‚ฌ์šฉ๋˜์—ˆ์Šต๋‹ˆ๋‹ค.

์ด ์—ฐ๊ตฌ๋Š” Google์˜ TPU Research Cloud (TRC)์˜ Cloud TPU ์ œ๊ณต์œผ๋กœ ์ˆ˜ํ–‰๋˜์—ˆ์Šต๋‹ˆ๋‹ค. โšก

์ƒ์„ธ

ํ•™์Šต ์ƒ์„ธ

- learning_rate: 3e-4 (cosine)
- warmup_ratio: 0.1
- batch_size: 512
- optimizer: adamw(b1=0.9, b2=0.98, eps=1e-8, weight_decay=0.01) (sketched below)
- duration: 2h 56m
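
The b1/b2 naming in the optimizer line suggests an optax-style setup, consistent with the TPU hardware below; a minimal sketch of how these hyperparameters fit together, assuming optax (the actual training code is not shown, and total_steps is a placeholder):

```python
# Sketch of the stated optimizer and schedule in optax (assumption:
# adamw(b1=..., b2=...) above refers to optax.adamw).
import optax

total_steps = 1_000  # hypothetical; the card only states a 2h 56m duration

# Cosine decay with 10% linear warmup (learning_rate: 3e-4, warmup_ratio: 0.1).
schedule = optax.warmup_cosine_decay_schedule(
    init_value=0.0,
    peak_value=3e-4,
    warmup_steps=int(0.1 * total_steps),
    decay_steps=total_steps,
)

optimizer = optax.adamw(
    learning_rate=schedule,
    b1=0.9,
    b2=0.98,
    eps=1e-8,
    weight_decay=0.01,
)
```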

ํ•™์Šต ์žฅ๋น„

TPU v4-8

์„ฑ๋Šฅ

Validation report:

```
              precision    recall  f1-score   support

           0       0.55      0.23      0.32       198
           1       0.68      0.48      0.57      1553
           2       0.37      0.69      0.49      1159
           3       0.56      0.41      0.47       967
           4       0.53      0.12      0.20       219

    accuracy                           0.49      4096
   macro avg       0.54      0.39      0.41      4096
weighted avg       0.55      0.49      0.49      4096
```

Confusion matrix:

```
[[ 45 118  35   0   0]
 [ 34 752 728  39   0]
 [  3 201 803 147   5]
 [  0  31 521 396  19]
 [  0   1  61 130  27]]
```

ํ•œ๊ตญ์–ด ์ž„๋ฒ ๋”ฉ์˜ ํ•œ๊ณ„์™€ qwen2.5 32b ๋ชจ๋ธ์˜ ํ‰๊ฐ€ ํ•œ๊ณ„๋กœ ์„ฑ๋Šฅ์ด ๋‚ฎ์€ ๊ฒƒ์œผ๋กœ ๋ณด์ž…๋‹ˆ๋‹ค. 3 ์ด์ƒ๊ณผ ๋ฏธ๋งŒ์œผ๋กœ ๊ตฌ๋ถ„ํ•  ๋•Œ f1 score๋Š” ์•ฝ 0.59์ž…๋‹ˆ๋‹ค.
