---
language: 
- zh
- bo
- kk
- ko
- mn
- ug
- yue
license: "apache-2.0"
---


### Chinese ELECTRA

Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants.
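
The model can be loaded with the 🤗 Transformers library. A minimal usage sketch follows; the repository id `hfl/chinese-electra-small-discriminator` is illustrative and should be replaced with this model's actual id on the Hub.

```python
# Minimal sketch: load the tokenizer and encoder and run a forward pass.
# The repo id below is an assumption for illustration; substitute this model's actual id.
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("hfl/chinese-electra-small-discriminator")
model = AutoModel.from_pretrained("hfl/chinese-electra-small-discriminator")

inputs = tokenizer("今天天气真好", return_tensors="pt")
outputs = model(**inputs)
# Hidden states of shape (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)
```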

This project is based on the official code of ELECTRA: https://github.com/google-research/electra

You may also be interested in:

Chinese MacBERT: https://github.com/ymcui/MacBERT  
Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm  
Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA  
Chinese XLNet: https://github.com/ymcui/Chinese-XLNet  
Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer  

More resources by HFL: https://github.com/ymcui/HFL-Anthology