## Korean Grammatical Error Correction Model

- Maintainer: [Soyoung Yoon](https://soyoung97.github.io/profile/)
- Official repository: [link](https://github.com/soyoung97/GEC-Korean)
- Dataset request form: [link](https://forms.gle/kF9pvJbLGvnh8ZnQ6)
- Demo: [link](https://huggingface.co/spaces/Soyoung97/gec-korean-demo)
- Colab demo: [link](https://colab.research.google.com/drive/1CL__3CpkhBzxWUbvsQmPTQWWu1cWmJHa?usp=sharing)

### Sample code

```python
import torch
from transformers import PreTrainedTokenizerFast, BartForConditionalGeneration

tokenizer = PreTrainedTokenizerFast.from_pretrained('Soyoung97/gec_kr')
model = BartForConditionalGeneration.from_pretrained('Soyoung97/gec_kr')

text = '한국어는어렵다.'  # "Korean is difficult." (missing a space)

# Wrap the raw token ids in the BOS/EOS special tokens the model expects.
raw_input_ids = tokenizer.encode(text)
input_ids = [tokenizer.bos_token_id] + raw_input_ids + [tokenizer.eos_token_id]

# Beam-search decoding with a repetition penalty to discourage degenerate output.
corrected_ids = model.generate(
    torch.tensor([input_ids]),
    max_length=128,
    eos_token_id=1,
    num_beams=4,
    early_stopping=True,
    repetition_penalty=2.0,
)
output_text = tokenizer.decode(corrected_ids.squeeze().tolist(), skip_special_tokens=True)
print(output_text)
# >>> '한국어는 어렵다.'
```

Special thanks to the [KoBART-summarization repository](https://huggingface.co/gogamza/kobart-summarization), on which this sample code is based.