---
license: apache-2.0
base_model: apple/OpenELM-450M
library_name: peft
tags:
- lora
- openelm
- gsm8k
- math
- adapter
- transformers
- peft
---
|
|
|
# 🧸 OpenELM-450M LoRA Adapter — Fine-Tuned on GSM8K

This is a **LoRA adapter** trained on the [GSM8K](https://huggingface.co/datasets/gsm8k) dataset using [Apple's OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) base model.

---
|
|
|
## Model Details
|
|
|
- **Base model**: [`apple/OpenELM-450M`](https://huggingface.co/apple/OpenELM-450M)
- **Adapter type**: [LoRA](https://arxiv.org/abs/2106.09685) via [PEFT](https://github.com/huggingface/peft), stored in float32 (see the config sketch below)
- **Training data**: [GSM8K](https://huggingface.co/datasets/gsm8k) math word problems
- **Language**: English
- **License**: Apache 2.0
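
For reference, an adapter like this is created with a PEFT `LoraConfig`. The sketch below is illustrative only: the rank, alpha, dropout, and target modules are assumptions, since the actual training hyperparameters are not recorded in this card.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# All hyperparameter values here are assumptions for illustration;
# the adapter's actual training configuration is not documented.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["qkv_proj"],  # assumed: OpenELM's fused attention projection
)

base_model = AutoModelForCausalLM.from_pretrained(
    "apple/OpenELM-450M", trust_remote_code=True
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```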
|
|
|
---
|
|
|
## How to Use
|
|
|
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# OpenELM uses a custom architecture, so trust_remote_code is required.
base_model = AutoModelForCausalLM.from_pretrained(
    "apple/OpenELM-450M", trust_remote_code=True
)

# OpenELM does not bundle a tokenizer; this adapter is paired with the
# GPT-Neo tokenizer.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")

# Attach the LoRA adapter weights (replace with this repository's ID).
model = PeftModel.from_pretrained(base_model, "<this-adapter-repo-id>")
```
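
Once the adapter is attached, inference goes through the standard `generate` API. The sketch below is a minimal example; the prompt format is an assumption, since the card does not document one.

```python
# Illustrative GSM8K-style prompt; adjust to match your own format.
prompt = "Q: A baker makes 12 muffins per tray and bakes 5 trays. How many muffins does she make in total?\nA:"

inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```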