t1101675, nielsr (HF Staff) committed on
Commit 91db146 · verified · 1 Parent(s): 2784e1c

Improve MiniPLM-Mamba-130M model card (#1)

- Improve MiniPLM-Mamba-130M model card (a111d62001f6ec263b850a99873c4cb151de3e9a)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1)
  1. README.md +8 -6
README.md CHANGED
@@ -1,11 +1,11 @@
  ---
- library_name: transformers
- license: apache-2.0
  datasets:
  - monology/pile-uncopyrighted
  - MiniLLM/pile-diff_samp-qwen_1.8B-qwen_104M-r0.5
  language:
  - en
+ library_name: transformers
+ license: apache-2.0
  metrics:
  - accuracy
  pipeline_tag: text-generation
@@ -15,10 +15,12 @@ pipeline_tag: text-generation
 
  [paper](https://arxiv.org/abs/2410.17215) | [code](https://github.com/thu-coai/MiniPLM)
 
- **MiniPLM-Mamba-130M** is a 130M model with the [Mamba achitecture](https://github.com/state-spaces/mamba) pre-trained from scratch on [the Pile](https://huggingface.co/datasets/monology/pile-uncopyrighted) using the MiniPLM knowledge distillation framework with the [offcial Qwen1.5-1.8B](https://huggingface.co/Qwen/Qwen1.5-1.8B) as the teacher model.
- This model shows the flexibility of the MiniPLM framework in conducting knowledge distillation across model families.
+ **MiniPLM-Mamba-130M** is a 130M parameter language model with the [Mamba architecture](https://github.com/state-spaces/mamba) pre-trained from scratch on [the Pile](https://huggingface.co/datasets/monology/pile-uncopyrighted) using the MiniPLM knowledge distillation framework. It uses the [official Qwen1.5-1.8B](https://huggingface.co/Qwen/Qwen1.5-1.8B) as the teacher model.
+ This model demonstrates the flexibility of the MiniPLM framework in conducting knowledge distillation across model families. The [pre-training corpus](https://huggingface.co/datasets/MiniLLM/pile-diff_samp-qwen_1.8B-qwen_104M-r0.5) refined by Difference Sampling in MiniPLM is open-sourced for reproducibility.
+
+ ## MiniPLM: Knowledge Distillation for Pre-Training Language Models
 
- We also open-source the [pre-training corpus](https://huggingface.co/datasets/MiniLLM/pile-diff_samp-qwen_1.8B-qwen_104M-r0.5) refined by Difference Sampling in MiniPLM for reproducibility.
+ Knowledge distillation (KD) is widely used to train small, high-performing student language models (LMs) using large teacher LMs. While effective in fine-tuning, KD during pre-training faces challenges in efficiency, flexibility, and effectiveness. Existing methods either incur high computational costs due to online teacher inference, require tokenization matching between teacher and student LMs, or risk losing the difficulty and diversity of the teacher-generated training data. To address these issues, MiniPLM is proposed, a KD framework for pre-training LMs by refining the training data distribution with the teacher's knowledge. For efficiency, MiniPLM performs offline teacher LM inference, allowing KD for multiple student LMs without adding training-time costs. For flexibility, MiniPLM operates solely on the training corpus, enabling KD across model families. For effectiveness, MiniPLM leverages the differences between large and small LMs to enhance the difficulty and diversity of the training data, helping student LMs acquire versatile and sophisticated knowledge.
 
  <p align='left'>
  <img src="https://cdn-uploads.huggingface.co/production/uploads/624ac662102fcdff87be51b9/2BqT0NgkmIXYlktovw9kG.png" width="1000">
@@ -26,7 +28,7 @@ We also open-source the [pre-training corpus](https://huggingface.co/datasets/Mi
 
  ## Evaluation
 
- MiniPLM models achieves better performance given the same computation and scales well across model sizes:
+ MiniPLM models achieve better performance given the same computation and scale well across model sizes:
 
  <p align='left'>
  <img src="https://cdn-uploads.huggingface.co/production/uploads/624ac662102fcdff87be51b9/EOYzajQcwQFT5PobqL3j0.png" width="1000">