---
language: en
license: apache-2.0
---

# LoNAS Model Card: lonas-bert-base-glue

Super-networks fine-tuned on BERT-base with the [GLUE benchmark](https://gluebenchmark.com/) using LoNAS.

## Model Details

### Information

- **Model name:** lonas-bert-base-glue
- **Base model:** [bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased)
- **Subnetwork version:** Super-network
- **NNCF Configurations:** [nncf_config/glue](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/LoNAS/nncf_config/glue)

### Adapter Configuration

- **LoRA rank:** 8
- **LoRA alpha:** 16
- **LoRA target modules:** query, value
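The settings above correspond to the standard LoRA update `h = Wx + (alpha/r) * B A x`, applied here with rank `r = 8` and `alpha = 16` to the query and value projections. Below is a minimal pure-Python sketch of that update; the `matvec` helper, the toy dimension, and all weight values are illustrative, not taken from the LoNAS code:

```python
# Sketch of the LoRA update h = W x + (alpha / r) * B (A x),
# with the card's rank r = 8 and alpha = 16. The hidden size here is
# toy-sized (16); real BERT-base query/value projections are 768x768.

def matvec(M, v):
    """Multiply matrix M (list of rows) by vector v."""
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

d, r, alpha = 16, 8, 16

W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # frozen base weight
A = [[0.1] * d for _ in range(r)]   # trainable down-projection (r x d)
B = [[0.0] * r for _ in range(d)]   # trainable up-projection (d x r), zero-initialized
x = [float(i) for i in range(d)]    # input activation

scaling = alpha / r  # 2.0
h = [base + scaling * delta
     for base, delta in zip(matvec(W, x), matvec(B, matvec(A, x)))]

# With B zero-initialized, the adapter starts as a no-op on the base output.
assert h == matvec(W, x)
```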
### Training and Evaluation

Trained and evaluated on the [GLUE benchmark](https://gluebenchmark.com/).

### Training Hyperparameters

| Task          | RTE  | MRPC | STS-B | CoLA | SST-2 | QNLI | QQP  | MNLI |
|---------------|------|------|-------|------|-------|------|------|------|
| Epoch         | 80   | 35   | 60    | 80   | 60    | 80   | 60   | 40   |
| Batch size    | 32   | 32   | 64    | 64   | 64    | 64   | 64   | 64   |
| Learning rate | 3e-4 | 5e-4 | 5e-4  | 3e-4 | 3e-4  | 4e-4 | 3e-4 | 4e-4 |
| Max length    | 128  | 128  | 128   | 128  | 128   | 256  | 128  | 128  |
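The table above can be collected into a small lookup for scripting. The dictionary, the task-key spellings, and the `training_args` helper below are illustrative assumptions (the flag names follow the standard `run_glue.py` command-line arguments):

```python
# Per-task training hyperparameters from the table above, stored as
# (epochs, batch_size, learning_rate, max_length). This dict and helper
# are a convenience sketch, not part of the LoNAS repository.
HPARAMS = {
    "rte":  (80, 32, 3e-4, 128),
    "mrpc": (35, 32, 5e-4, 128),
    "stsb": (60, 64, 5e-4, 128),
    "cola": (80, 64, 3e-4, 128),
    "sst2": (60, 64, 3e-4, 128),
    "qnli": (80, 64, 4e-4, 256),
    "qqp":  (60, 64, 3e-4, 128),
    "mnli": (40, 64, 4e-4, 128),
}

def training_args(task: str) -> str:
    """Render one table column as run_glue.py-style training flags."""
    epochs, batch, lr, max_len = HPARAMS[task]
    return (f"--num_train_epochs {epochs} "
            f"--per_device_train_batch_size {batch} "
            f"--learning_rate {lr} "
            f"--max_seq_length {max_len}")

print(training_args("qnli"))
```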
## How to use

Refer to [https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/LoNAS/running_commands](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/LoNAS/running_commands):

```bash
CUDA_VISIBLE_DEVICES=${DEVICES} python run_glue.py \
    --task_name ${TASK} \
    --model_name_or_path bert-base-uncased \
    --do_eval \
    --do_search \
    --per_device_eval_batch_size 64 \
    --max_seq_length ${MAX_LENGTH} \
    --lora \
    --lora_weights lonas-bert-base-glue/lonas-bert-base-${TASK} \
    --nncf_config nncf_config/glue/nncf_lonas_bert_base_${TASK}.json \
    --do_test \
    --output_dir lonas-bert-base-glue/lonas-bert-base-${TASK}/results
```
## Evaluation Results

Results of the optimal sub-network discovered from the super-network:

| Method    | Trainable Parameter Ratio | GFLOPs  | RTE   | MRPC  | STS-B | CoLA  | SST-2 | QNLI  | QQP   | MNLI  | AVG       |
|-----------|---------------------------|---------|-------|-------|-------|-------|-------|-------|-------|-------|-----------|
| LoRA      | 0.27%                     | 11.2    | 65.85 | 84.46 | 88.73 | 57.58 | 92.06 | 90.62 | 89.41 | 83.00 | 81.46     |
| **LoNAS** | 0.27%                     | **8.0** | 70.76 | 88.97 | 88.28 | 61.12 | 93.23 | 91.21 | 88.55 | 82.00 | **83.02** |
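The AVG column appears to be the unweighted mean of the eight per-task scores; a quick sanity check with the values copied from the table:

```python
# Sanity check: AVG is the plain mean of the eight task scores
# (RTE, MRPC, STS-B, CoLA, SST-2, QNLI, QQP, MNLI).
lora  = [65.85, 84.46, 88.73, 57.58, 92.06, 90.62, 89.41, 83.00]
lonas = [70.76, 88.97, 88.28, 61.12, 93.23, 91.21, 88.55, 82.00]

def mean(xs):
    return sum(xs) / len(xs)

# Both means round to the reported values: 81.46 (LoRA) and 83.02 (LoNAS).
assert abs(mean(lora) - 81.46) < 0.01
assert abs(mean(lonas) - 83.02) < 0.01
```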
## Model Sources

- **Repository:** [https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/LoNAS](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/LoNAS)
- **Paper:** [LoNAS: Elastic Low-Rank Adapters for Efficient Large Language Models]()

## Citation

```bibtex
@article{munoz2024lonas,
  title={LoNAS: Elastic Low-Rank Adapters for Efficient Large Language Models},
  author={J. Pablo Munoz and Jinjie Yuan and Yi Zheng and Nilesh Jain},
  journal={},
  year={2024}
}
```

## License

Apache-2.0