---
license: apache-2.0
base_model: google/mt5-large
tags:
- thai
- grammatical-error-correction
- mt5
- fine-tuned
- l2-learners
- generated_from_keras_callback
model-index:
- name: pakawadeep/ctfl-gec-th
  results:
    - task:
        name: Grammatical Error Correction
        type: text2text-generation
      dataset:
        name: CTFL-GEC
        type: custom
      metrics:
        - name: Precision
          type: precision
          value: 0.47
        - name: Recall
          type: recall
          value: 0.47
        - name: F1
          type: f1
          value: 0.47
        - name: F0.5
          type: f0.5
          value: 0.47
        - name: BLEU
          type: bleu
          value: 0.69
        - name: GLEU
          type: gleu
          value: 0.68
        - name: CHRF
          type: chrf
          value: 0.87
language:
- th
---

# pakawadeep/ctfl-gec-th

This model is a fine-tuned version of [google/mt5-large](https://huggingface.co/google/mt5-large), trained for **Grammatical Error Correction (GEC)** of **Thai** written by **L2 learners**. It was developed as part of the thesis *"Grammatical Error Correction for L2 Learners of Thai Using Large Language Models"* and is the best-performing model in that study.

## Model description

This model is based on the mT5-large architecture and was fine-tuned on the CTFL-GEC dataset, which contains human-annotated grammatical error corrections from L2 Thai learners. To improve generalization, the dataset was augmented with the Self-Instruct method, generating additional synthetic error-correction pairs amounting to 200% of the original data.
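
The study's actual Self-Instruct prompts and the LLM used for generation are not documented in this card; the following is only an illustrative sketch of the augmentation idea, in which a hypothetical `llm_generate` callable stands in for whichever LLM corrupts clean Thai sentences into synthetic (erroneous, corrected) pairs:

```python
# Illustrative sketch only: the study's actual prompts and LLM are not
# documented here. `llm_generate` is a hypothetical stand-in callable.
def make_augmentation_prompt(clean_sentence: str) -> str:
    """Ask an LLM to corrupt a clean Thai sentence with a typical L2 error."""
    return (
        "Rewrite the following Thai sentence so it contains one grammatical "
        "error typical of an L2 learner (e.g. wrong word order, an omitted "
        "word, or an incorrect particle). Return only the rewritten sentence.\n"
        + clean_sentence
    )

def synthesize_pair(clean_sentence: str, llm_generate) -> tuple[str, str]:
    erroneous = llm_generate(make_augmentation_prompt(clean_sentence))
    return erroneous, clean_sentence  # (noisy source, gold correction)
```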

The model is capable of correcting sentence-level grammatical errors typical of L2 Thai writing, including issues with word order, omissions, and incorrect particles.
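
A minimal usage sketch with 🤗 Transformers. Since the model was trained with Keras (per the `generated_from_keras_callback` tag), TensorFlow weights are assumed to be available in the repository; the input sentence is purely illustrative:

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "pakawadeep/ctfl-gec-th"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative input; replace with a learner sentence to correct.
text = "ฉันไปโรงเรียนเมื่อวาน"

inputs = tokenizer(text, return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Whether the model expects a task prefix on the input is not documented here, so the raw sentence is passed as-is.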

## Intended uses & limitations

### Intended uses
- Grammatical error correction for Thai language learners
- Linguistic analysis of L2 learner errors
- Research in low-resource GEC methods

### Limitations
- May not generalize to informal or dialectal Thai
- Performance may degrade on sentence types or domains not represented in the training data
- Designed for Thai GEC only; not optimized for multilingual correction tasks

## Training and evaluation data

The model was fine-tuned on a combined dataset consisting of:
- **CTFL-GEC**: A manually annotated corpus of Thai learner writing (370 writing samples, 4,200+ sentences)
- **Self-Instruct augmentation (200%)**: Synthetic GEC pairs generated using LLM prompting

Evaluation was conducted on a held-out portion of the human-annotated dataset using standard GEC metrics (precision, recall, F1, F0.5, BLEU, GLEU, and chrF; see the scores listed above).
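
As a rough illustration of how the corpus-level overlap metrics can be computed (the study's exact evaluation scripts, including any edit-level precision/recall scoring, are not reproduced here), one can use `sacrebleu` with hypothetical outputs:

```python
import sacrebleu

# Hypothetical system outputs and gold corrections (parallel lists).
hyps = ["ฉันกินข้าวแล้ว", "เขาไปตลาดเมื่อวาน"]
refs = [["ฉันกินข้าวแล้ว", "เขาไปตลาดเมื่อวาน"]]  # one reference stream

bleu = sacrebleu.corpus_bleu(hyps, refs)
chrf = sacrebleu.corpus_chrf(hyps, refs)
print(f"BLEU: {bleu.score:.2f}  chrF: {chrf.score:.2f}")
```

Character-based chrF sidesteps the word-segmentation problem that Thai, written without spaces between words, poses for word-level BLEU.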

## Training procedure

### Training hyperparameters
- **Optimizer**: AdamWeightDecay
- **Learning rate**: 2e-5
- **Beta1/Beta2**: 0.9 / 0.999
- **Epsilon**: 1e-7
- **Weight decay**: 0.01
- **Precision**: float32
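
A sketch of constructing an equivalent optimizer with the `AdamWeightDecay` class from the Transformers TensorFlow utilities, mirroring the hyperparameters above (the actual training script is not part of this card):

```python
from transformers import AdamWeightDecay

# Mirrors the hyperparameters listed above.
optimizer = AdamWeightDecay(
    learning_rate=2e-5,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-7,
    weight_decay_rate=0.01,
)
```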

### Framework versions
- Transformers 4.41.2
- TensorFlow 2.15.0
- Datasets 2.20.0
- Tokenizers 0.19.1

## Citation

If you use this model, please cite the associated thesis:

```
Pakawadee P. Chookwan, "Grammatical Error Correction for L2 Learners of Thai Using Large Language Models", 2025.
```