---
base_model:
- lemon-mint/LaBSE-EnKo-Nano-Preview-v0.3
datasets:
- devngho/ko_llm_annotations
language:
- ko
library_name: transformers
license: mit
metrics:
- f1
---

# devngho/ko_edu_classifier_v2_lemon-mint_LaBSE-EnKo-Nano-Preview-v0.3

This model is [lemon-mint/LaBSE-EnKo-Nano-Preview-v0.3](https://huggingface.co/lemon-mint/LaBSE-EnKo-Nano-Preview-v0.3) with a classifier head. It is designed to evaluate the educational value of Korean web pages, similar to [HuggingFaceFW/fineweb-edu-classifier](https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier), but focused on Korean content. The training data comes from the [devngho/ko_llm_annotations](https://huggingface.co/datasets/devngho/ko_llm_annotations) dataset, which contains 500k samples extracted from [blueapple8259/c4-ko-cleaned-2](https://huggingface.co/datasets/blueapple8259/c4-ko-cleaned-2) and scored with [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct).

This research was supported with Cloud TPUs from Google's TPU Research Cloud [(TRC)](https://sites.research.google/trc/about/). ⚡

## Details

- **Developed by:** devngho
- **Language(s):** ko
- **License:** mit
- **Base model:** [lemon-mint/LaBSE-EnKo-Nano-Preview-v0.3](https://huggingface.co/lemon-mint/LaBSE-EnKo-Nano-Preview-v0.3)
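The snippet below is a minimal inference sketch, not code taken from this repository: it assumes the checkpoint loads through `AutoModelForSequenceClassification`, and the example sentence and the regression-vs-classification branch are illustrative. Check the model's `config.json` for the actual head layout before relying on it.

```python
# Minimal inference sketch (assumption: the checkpoint loads via
# AutoModelForSequenceClassification; verify the head in config.json).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "devngho/ko_edu_classifier_v2_lemon-mint_LaBSE-EnKo-Nano-Preview-v0.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Hypothetical example sentence (Korean): "Photosynthesis is the process by
# which plants convert light energy into chemical energy."
text = "광합성은 식물이 빛 에너지를 화학 에너지로 바꾸는 과정입니다."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    logits = model(**inputs).logits

if logits.shape[-1] == 1:
    # Regression-style head (fineweb-edu-classifier convention): round to 0-4.
    score = int(round(float(logits.squeeze().clamp(0, 4))))
else:
    # Multi-class head: pick the most likely of the five labels (0-4).
    score = int(logits.argmax(dim=-1))

print(f"educational score: {score}")
```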
## Training details

- learning_rate: 3e-4 (cosine)
- warmup_ratio: 0.1
- batch_size: 512
- optimizer: adamw(b1=0.9, b2=0.98, eps=1e-8, weight_decay=0.01)
- duration: 2h 56m

## Training hardware

TPU v4-8

## Performance

```
Validation Report:
              precision    recall  f1-score   support

           0       0.55      0.23      0.32       198
           1       0.68      0.48      0.57      1553
           2       0.37      0.69      0.49      1159
           3       0.56      0.41      0.47       967
           4       0.53      0.12      0.20       219

    accuracy                           0.49      4096
   macro avg       0.54      0.39      0.41      4096
weighted avg       0.55      0.49      0.49      4096

Confusion Matrix:
[[ 45 118  35   0   0]
 [ 34 752 728  39   0]
 [  3 201 803 147   5]
 [  0  31 521 396  19]
 [  0   1  61 130  27]]
```

The low performance is likely due to the limitations of Korean embeddings and of the Qwen2.5 32B annotations. When the scores are split into 3 and above versus below 3, the F1 score is about 0.59 (a quick check is shown below).
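The ~0.59 binary F1 can be reproduced from the confusion matrix above, assuming rows are true labels, columns are predictions, and scores of 3 and above are treated as the positive class (a sanity check, not part of the original evaluation code):

```python
# Collapse the 5x5 confusion matrix into a binary ">= 3" vs. "< 3" split
# and recompute the F1 score.
import numpy as np

cm = np.array([
    [45, 118,  35,   0,  0],
    [34, 752, 728,  39,  0],
    [ 3, 201, 803, 147,  5],
    [ 0,  31, 521, 396, 19],
    [ 0,   1,  61, 130, 27],
])

tp = cm[3:, 3:].sum()   # true >= 3, predicted >= 3
fn = cm[3:, :3].sum()   # true >= 3, predicted < 3
fp = cm[:3, 3:].sum()   # true < 3,  predicted >= 3

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(f"binary F1 (>=3 vs <3): {f1:.2f}")  # ~0.59
```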