---
language: en
license: apache-2.0
---
# LoNAS Model Card: lonas-bloomz-7b-math
The super-network produced by fine-tuning BLOOMZ-7B on math reasoning datasets using LoNAS.
## Model Details
### Information
- **Model name:** lonas-bloomz-7b-math
- **Base model:** [BLOOMZ-7b](https://huggingface.co/bigscience/bloomz-7b1)
- **Domain:** Math
- **Subnetwork version:** Super-network
- **NNCF Configuration:** [nncf_lonas_bloomz_7b.json](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/LoNAS/nncf_config/unified_math/nncf_lonas_bloomz_7b.json)
### Adapter Configuration
- **LoRA rank:** 32
- **LoRA alpha:** 64
- **LoRA target modules:** query_key_value, dense_h_to_4h, dense_4h_to_h
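
For reference, these settings map onto a standard `peft` `LoraConfig` as sketched below. This is only an illustrative sketch: LoNAS makes the adapters elastic through NNCF, which plain `peft` does not reproduce.

```python
# Illustrative sketch (not the LoNAS training code): the adapter settings
# above expressed as a standard peft LoraConfig.
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,          # LoRA rank
    lora_alpha=64, # LoRA alpha
    target_modules=["query_key_value", "dense_h_to_4h", "dense_4h_to_h"],
    task_type="CAUSAL_LM",
)
```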
### Training Hyperparameters
- **Batch size:** 16
- **Learning rate:** 3e-4
- **Epochs:** 8
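
A minimal sketch of these hyperparameters as `transformers` `TrainingArguments`; the actual LoNAS training loop with NNCF elastic adapters lives in the repository, and whether the batch size is per-device or effective is an assumption here.

```python
# Illustrative sketch only; see the LoNAS repository for the real training setup.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="lonas-bloomz-7b-math",
    per_device_train_batch_size=16,  # assumption: card does not specify per-device vs. effective
    learning_rate=3e-4,
    num_train_epochs=8,
)
```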
### Training Data
Unified math reasoning dataset: [math_10k.json](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/ft-training_set/math_10k.json) (collected from the training sets of GSM8K, MAWPS, and AQuA).
### Evaluation Data
[GSM8K](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/gsm8k/test.json), [AQuA](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/AQuA/test.json), [MAWPS](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/mawps/test.json), and [SVAMP](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/SVAMP/test.json)
## How to use
Refer to [https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/LoNAS#evaluation](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/LoNAS#evaluation):
```bash
CUDA_VISIBLE_DEVICES=${DEVICES} python run_math.py \
--dataset_path None \
--model_name_or_path bigscience/bloomz-7b1 \
--lora \
--lora_weights lonas-bloomz-7b-math \
--nncf_config nncf_config/unified_math/nncf_lonas_bloomz_7b.json \
--do_test \
--output_dir lonas-bloomz-7b-math/results
```
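
Alternatively, the adapter weights can be attached with plain `transformers`/`peft`, as sketched below. Note that this restores only the LoRA super-network weights; extracting and activating the heuristic sub-network requires the NNCF configuration and tooling from the repository. The Hub id `IntelLabs/lonas-bloomz-7b-math` is an assumption.

```python
# Minimal loading sketch (LoRA weights only, no NNCF sub-network activation).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-7b1")
base_model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloomz-7b1", torch_dtype=torch.float16, device_map="auto"
)
# Assumed Hub id for this card's weights.
model = PeftModel.from_pretrained(base_model, "IntelLabs/lonas-bloomz-7b-math")
model.eval()

prompt = "Question: A pen costs 3 dollars and a pencil costs 2 dollars. How much do 2 pens and 4 pencils cost? Answer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```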
## Evaluation Results
Results of the heuristic sub-network discovered from the super-network:
| Method | Total Params. | TFLOPs | GSM8K | AQuA | MAWPS | SVAMP | Average |
|------------|---------------|-----------|-------|------|-------|-------|-----------|
| LoRA | 7.1B | 1.8 | 17.4 | 21.3 | 70.2 | 41.0 | **37.5** |
| **LoNAS** | **6.1B** | **1.5** | 18.6 | 22.0 | 76.5 | 31.8 | 37.2 |
## Model Sources
**Repository:** [https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/LoNAS](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/LoNAS)
**Paper:**
- [LoNAS: Elastic Low-Rank Adapters for Efficient Large Language Models](https://aclanthology.org/2024.lrec-main.940)
- [Low-Rank Adapters Meet Neural Architecture Search for LLM Compression](https://arxiv.org/abs/2501.16372)
## Citation
```bibtex
@inproceedings{munoz-etal-2024-lonas,
title = "{L}o{NAS}: Elastic Low-Rank Adapters for Efficient Large Language Models",
author = "Munoz, Juan Pablo and
Yuan, Jinjie and
Zheng, Yi and
Jain, Nilesh",
editor = "Calzolari, Nicoletta and
Kan, Min-Yen and
Hoste, Veronique and
Lenci, Alessandro and
Sakti, Sakriani and
Xue, Nianwen",
booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.lrec-main.940",
pages = "10760--10776",
}
```
## License
Apache-2.0