faxnoprinter committed
Commit e222d50 · verified · 1 Parent(s): 70c824c

Update README.md

Files changed (1):
  1. README.md +1 -3
README.md CHANGED

@@ -20,7 +20,7 @@ This is a **LoRA adapter** trained on the [GSM8K](https://huggingface.co/dataset
 ## Model Details
 
 - **Base model**: [`apple/OpenELM-450M`](https://huggingface.co/apple/OpenELM-450M)
-- **Adapter type**: [LoRA](https://arxiv.org/abs/2106.09685) via [PEFT](https://github.com/huggingface/peft)
+- **Adapter type**: [LoRA](https://arxiv.org/abs/2106.09685) via [PEFT](https://github.com/huggingface/peft) (float32)
 - **Trained on**: GSM8K (math word problems)
 - **Languages**: English
 - **Model size**: ~450M parameters (base); adapter size is small
@@ -32,10 +32,8 @@ This is a **LoRA adapter** trained on the [GSM8K](https://huggingface.co/dataset
 
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
-from peft import PeftModel
 
 base_model = AutoModelForCausalLM.from_pretrained("apple/OpenELM-450M")
 tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
 
-model = PeftModel.from_pretrained(base_model, "your-username/openelm-450m-gsm8k-lora")
 
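Note that after this commit the README's snippet loads only the base model and tokenizer; the two removed lines were the ones that actually attached the LoRA adapter. For reference, a minimal sketch of the full adapter-loading flow, reassembled from the removed and surviving lines: the `trust_remote_code=True` flag is an assumption (the `apple/OpenELM-450M` repo ships custom modeling code and is not loadable without it), and `your-username/openelm-450m-gsm8k-lora` is the placeholder repo id from the pre-commit README, not a real repository.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the frozen base model. OpenELM uses custom modeling code on the Hub,
# so trust_remote_code=True is assumed here (not stated in this README).
base_model = AutoModelForCausalLM.from_pretrained(
    "apple/OpenELM-450M", trust_remote_code=True
)
# Tokenizer choice as given in the README.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")

# Attach the LoRA adapter on top of the base weights.
# The repo id below is the placeholder from the pre-commit README.
model = PeftModel.from_pretrained(
    base_model, "your-username/openelm-450m-gsm8k-lora"
)

# Generate with the adapted model on a GSM8K-style prompt.
inputs = tokenizer(
    "Natalia sold clips to 48 of her friends in April.", return_tensors="pt"
)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```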