Upload README.md with huggingface_hub
README.md CHANGED
@@ -26,32 +26,14 @@ Chinese language model trained on pretrain dataset.
 
 ## Usage
 ```python
-import torch
-from model.model import Transformer
-from model.LMConfig import LMConfig
-
-checkpoint = torch.load('pretrain_512.pth')
-model.load_state_dict(checkpoint['model'])
-```
-
-## Model Description
-This is a lightweight Chinese language model trained on a 4.33GB pretrain dataset. The model uses a standard transformer architecture optimized for Chinese text processing.
-
-## Intended Uses
-- Chinese text generation
-- Language modeling
-- Text completion
-- Educational purposes
-
-- Trained on A6000 GPU
-- Learning rate: 2e-4
-- Batch size: 128
-- Training epochs: 20
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+model = AutoModelForCausalLM.from_pretrained("samz/minimind-pretrain")
+tokenizer = AutoTokenizer.from_pretrained("samz/minimind-pretrain")
+
+text = "今天天气真不错"
+inputs = tokenizer(text, return_tensors="pt")
+outputs = model.generate(**inputs, max_length=50)
+result = tokenizer.decode(outputs[0])
+print(result)
+```
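Note: the removed snippet loads raw PyTorch weights but never constructs the model object, so it does not run as written. Below is a minimal sketch of how that older path could be completed, assuming the minimind repository layout (`model/model.py`, `model/LMConfig.py`) and a default `LMConfig`; the instantiation step is an assumption not shown in the diff.

```python
# Sketch only: the raw-checkpoint loading path that this commit removes.
# Assumes the minimind repo layout (model/model.py, model/LMConfig.py) and
# default LMConfig hyperparameters; the diff never shows the model being built.
import torch

from model.model import Transformer
from model.LMConfig import LMConfig

config = LMConfig()                                    # assumed default config
model = Transformer(config)                            # build the model before loading weights
checkpoint = torch.load('pretrain_512.pth', map_location='cpu')
model.load_state_dict(checkpoint['model'])             # weights stored under the 'model' key, per the removed README
model.eval()
```

The added snippet avoids this manual step by pulling both model and tokenizer directly from the Hub repo id via `transformers`.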