---
pipeline_tag: text-generation
---

# Model Card for sparsing-law-0.8b-relu

- **Paper:** [paper](https://arxiv.org/pdf/2411.02335)
- **Repository containing relevant code:** [github](https://github.com/thunlp/SparsingLaw)

### Introduction

The model is one of the key checkpoints used for most analyses in the paper *Sparsing Law: Towards Large Language Models with Greater Activation Sparsity*.
It is ReLU-activated and contains approximately 0.8 billion non-embedding parameters.
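
Since the model is ReLU-activated, its activation sparsity can be read off directly as the fraction of zero entries in the post-ReLU FFN activations. Below is a minimal sketch of such a measurement using the `transformers` API; the repo id and the presence of `torch.nn.ReLU` modules in the loaded architecture are assumptions for illustration, not details confirmed by this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical Hub path -- substitute this model's actual repo id.
# Custom architectures may also require trust_remote_code=True.
MODEL_ID = "thunlp/sparsing-law-0.8b-relu"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
model.eval()

sparsities = []

def record_sparsity(module, inputs, output):
    # Fraction of exactly-zero entries in the post-ReLU activation.
    sparsities.append((output == 0).float().mean().item())

# Hook every nn.ReLU module; if the implementation applies ReLU
# functionally instead, hook the FFN submodules that wrap it.
for module in model.modules():
    if isinstance(module, torch.nn.ReLU):
        module.register_forward_hook(record_sparsity)

inputs = tokenizer("Activation sparsity in large language models.",
                   return_tensors="pt")
with torch.no_grad():
    model(**inputs)

if sparsities:
    print(f"mean activation sparsity: {sum(sparsities) / len(sparsities):.3f}")
else:
    print("No nn.ReLU modules found; inspect the FFN implementation instead.")
```
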
The model was trained from scratch using the pre-training dataset described in our paper, with the WSD (Warmup-Stable-Decay) learning rate scheduler.
Note that it is a base model derived from the last checkpoint of the stable pre-training stage, which has not undergone the decay or SFT stage.
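
For reference, the WSD scheduler has a simple shape: a short linear warmup to the peak learning rate, a long stable stage held at that peak, and a final decay stage; this checkpoint corresponds to the end of the stable stage. The sketch below illustrates that shape only, with hypothetical stage fractions and a linear decay; the actual hyperparameters and decay form are those described in the paper.

```python
def wsd_lr(step: int, total_steps: int, peak_lr: float,
           warmup_frac: float = 0.01, decay_frac: float = 0.1) -> float:
    """Warmup-Stable-Decay learning rate; shape only, illustrative values."""
    warmup_steps = max(int(total_steps * warmup_frac), 1)
    decay_start = int(total_steps * (1.0 - decay_frac))
    if step < warmup_steps:
        # Warmup: linear ramp from 0 up to peak_lr.
        return peak_lr * step / warmup_steps
    if step < decay_start:
        # Stable: held constant at peak_lr. This model is the last
        # checkpoint of this stage, before any decay is applied.
        return peak_lr
    # Decay: linear anneal from peak_lr down to 0.
    return peak_lr * (total_steps - step) / max(total_steps - decay_start, 1)
```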