license: other
license_name: apple-sample-code-license
license_link: LICENSE

OpenELM

Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari

We introduce OpenELM, a family of Open-source Efficient Language Models. We release both pretrained and instruction-tuned models with 270M, 450M, 1.1B, and 3B parameters.

Our pre-training dataset contains RefinedWeb, deduplicated PILE, a subset of RedPajama, and a subset of Dolma v1.6, totaling approximately 1.8 trillion tokens.

See the snippet below for how to load each of the released checkpoints:


from transformers import AutoModelForCausalLM

# Pretrained (base) checkpoints
openelm_270m = AutoModelForCausalLM.from_pretrained("apple/OpenELM-270M", trust_remote_code=True)
openelm_450m = AutoModelForCausalLM.from_pretrained("apple/OpenELM-450M", trust_remote_code=True)
openelm_1b = AutoModelForCausalLM.from_pretrained("apple/OpenELM-1_1B", trust_remote_code=True)
openelm_3b = AutoModelForCausalLM.from_pretrained("apple/OpenELM-3B", trust_remote_code=True)

# Instruction-tuned checkpoints
openelm_270m_instruct = AutoModelForCausalLM.from_pretrained("apple/OpenELM-270M-Instruct", trust_remote_code=True)
openelm_450m_instruct = AutoModelForCausalLM.from_pretrained("apple/OpenELM-450M-Instruct", trust_remote_code=True)
openelm_1b_instruct = AutoModelForCausalLM.from_pretrained("apple/OpenELM-1_1B-Instruct", trust_remote_code=True)
openelm_3b_instruct = AutoModelForCausalLM.from_pretrained("apple/OpenELM-3B-Instruct", trust_remote_code=True)
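
If GPU memory is a constraint, the larger checkpoints can also be loaded in half precision via standard transformers arguments. A minimal sketch (the bfloat16 choice is our illustration, not a recommendation from the OpenELM release):

import torch
from transformers import AutoModelForCausalLM

# Sketch: load the 3B checkpoint in bfloat16 to roughly halve memory use
# compared to the default float32 weights (assumes a CUDA GPU).
openelm_3b = AutoModelForCausalLM.from_pretrained(
    "apple/OpenELM-3B",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
).cuda().eval()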

Usage

Below we provide an example of loading the model via the Hugging Face Hub and generating text:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
# OpenELM uses the LLaMA tokenizer. Obtain access to "meta-llama/Llama-2-7b-hf",
# then see https://huggingface.co/docs/hub/security-tokens for how to create an access token.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", token="hf_xxxx")

model_path = "apple/OpenELM-450M"

model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
model = model.cuda().eval()
prompt = "Once upon a time there was"
tokenized_prompt = tokenizer(prompt)
prompt_tensor = torch.tensor(tokenized_prompt["input_ids"], device="cuda").unsqueeze(0)
output_ids = model.generate(prompt_tensor, max_new_tokens=256, repetition_penalty=1.2, pad_token_id=0)
output_ids = output_ids[0].tolist()
output_text = tokenizer.decode(output_ids, skip_special_tokens=True)
print(f'{model_path=}, {prompt=}\n')
print(output_text)

# below is the output:
"""
model_path='apple/OpenELM-450M', prompt='Once upon a time there was'

Once upon a time there was a little girl who lived in the woods. She had a big heart and she loved to play with her friends. One day, she decided to go for a walk in the woods. As she walked, she saw a beautiful tree. It was so tall that it looked like a mountain. The tree was covered with leaves and flowers.
The little girl thought that this tree was very pretty. She wanted to climb up to the tree and see what was inside. So, she went up to the tree and climbed up to the top. She was very excited when she saw that the tree was full of beautiful flowers. She also
"""

Main Results

Zero-Shot

| Model Size | ARC-c | ARC-e | BoolQ | HellaSwag | PIQA | SciQ | WinoGrande | Average |
|---|---|---|---|---|---|---|---|---|
| OpenELM-270M | 26.45 | 45.08 | 53.98 | 46.71 | 69.75 | 84.70 | 53.91 | 54.37 |
| OpenELM-270M-Instruct | 30.55 | 46.68 | 48.56 | 52.07 | 70.78 | 84.40 | 52.72 | 55.11 |
| OpenELM-450M | 27.56 | 48.06 | 55.78 | 53.97 | 72.31 | 87.20 | 58.01 | 57.56 |
| OpenELM-450M-Instruct | 30.38 | 50.00 | 60.37 | 59.34 | 72.63 | 88.00 | 58.96 | 59.95 |
| OpenELM-1_1B | 32.34 | 55.43 | 63.58 | 64.81 | 75.57 | 90.60 | 61.72 | 63.44 |
| OpenELM-1_1B-Instruct | 37.97 | 52.23 | 70.00 | 71.20 | 75.03 | 89.30 | 62.75 | 65.50 |
| OpenELM-3B | 35.58 | 59.89 | 67.40 | 72.44 | 78.24 | 92.70 | 65.51 | 67.39 |
| OpenELM-3B-Instruct | 39.42 | 61.74 | 68.17 | 76.36 | 79.00 | 92.50 | 66.85 | 69.15 |

LLM360

| Model Size | ARC-c | HellaSwag | MMLU | TruthfulQA | WinoGrande | Average |
|---|---|---|---|---|---|---|
| OpenELM-270M | 27.65 | 47.15 | 25.72 | 39.24 | 53.83 | 38.72 |
| OpenELM-270M-Instruct | 32.51 | 51.58 | 26.70 | 38.72 | 53.20 | 40.54 |
| OpenELM-450M | 30.20 | 53.86 | 26.01 | 40.18 | 57.22 | 41.50 |
| OpenELM-450M-Instruct | 33.53 | 59.31 | 25.41 | 40.48 | 58.33 | 43.41 |
| OpenELM-1_1B | 36.69 | 65.71 | 27.05 | 36.98 | 63.22 | 45.93 |
| OpenELM-1_1B-Instruct | 41.55 | 71.83 | 25.65 | 45.95 | 64.72 | 49.94 |
| OpenELM-3B | 42.24 | 73.28 | 26.76 | 34.98 | 67.25 | 48.90 |
| OpenELM-3B-Instruct | 47.70 | 76.87 | 24.80 | 38.76 | 67.96 | 51.22 |

OpenLLM Leaderboard

| Model Size | ARC-c | CrowS-Pairs | HellaSwag | MMLU | PIQA | RACE | TruthfulQA | WinoGrande | Average |
|---|---|---|---|---|---|---|---|---|---|
| OpenELM-270M | 27.65 | 66.79 | 47.15 | 25.72 | 69.75 | 30.91 | 39.24 | 53.83 | 45.13 |
| OpenELM-270M-Instruct | 32.51 | 66.01 | 51.58 | 26.70 | 70.78 | 33.78 | 38.72 | 53.20 | 46.66 |
| OpenELM-450M | 30.20 | 68.63 | 53.86 | 26.01 | 72.31 | 33.11 | 40.18 | 57.22 | 47.69 |
| OpenELM-450M-Instruct | 33.53 | 67.44 | 59.31 | 25.41 | 72.63 | 36.84 | 40.48 | 58.33 | 49.25 |
| OpenELM-1_1B | 36.69 | 71.74 | 65.71 | 27.05 | 75.57 | 36.46 | 36.98 | 63.22 | 51.68 |
| OpenELM-1_1B-Instruct | 41.55 | 71.02 | 71.83 | 25.65 | 75.03 | 39.43 | 45.95 | 64.72 | 54.40 |
| OpenELM-3B | 42.24 | 73.29 | 73.28 | 26.76 | 78.24 | 38.76 | 34.98 | 67.25 | 54.35 |
| OpenELM-3B-Instruct | 47.70 | 72.33 | 76.87 | 24.80 | 79.00 | 38.47 | 38.76 | 67.96 | 55.73 |

See the technical report for more results and comparisons.

Evaluation

Setup

Install the following dependencies:


# install the public lm-eval-harness

harness_repo="public-lm-eval-harness"
git clone https://github.com/EleutherAI/lm-evaluation-harness ${harness_repo}
cd ${harness_repo}
# checkout the main branch as of 2024-03-15 (SHA dc90fec)
git checkout dc90fec
pip install -e .
cd ..

# 66d6242 is the main branch as of 2024-04-01
pip install datasets@git+https://github.com/huggingface/datasets.git@66d6242
# quote the version specifiers so the shell does not treat ">" as output redirection
pip install 'tokenizers>=0.15.2' 'transformers>=4.38.2' 'sentencepiece>=0.2.0'
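
As a quick sanity check that the pinned dependencies resolved correctly, the installed versions can be printed from Python (a convenience sketch of ours, not part of the official setup):

import datasets
import sentencepiece
import tokenizers
import transformers

# Expect transformers >= 4.38.2 and tokenizers >= 0.15.2 per the setup above.
print("transformers:", transformers.__version__)
print("tokenizers:", tokenizers.__version__)
print("datasets:", datasets.__version__)
print("sentencepiece:", sentencepiece.__version__)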

Evaluate OpenELM


# OpenELM-270M
hf_model=OpenELM-270M

# this flag is needed because lm-eval-harness sets add_bos_token to False by default,
# but OpenELM uses the LLaMA tokenizer, which requires add_bos_token to be True
add_bos_token=True
batch_size=1

mkdir lm_eval_output

shot=0
task=arc_challenge,arc_easy,boolq,hellaswag,piqa,race,winogrande,sciq,truthfulqa_mc2
lm_eval --model hf \
        --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token} \
        --tasks ${task} \
        --device cuda:0 \
        --num_fewshot ${shot} \
        --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
        --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log

shot=5
task=mmlu,winogrande
lm_eval --model hf \
        --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token} \
        --tasks ${task} \
        --device cuda:0 \
        --num_fewshot ${shot} \
        --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
        --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log

shot=25
task=arc_challenge,crows_pairs_english
lm_eval --model hf \
        --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token} \
        --tasks ${task} \
        --device cuda:0 \
        --num_fewshot ${shot} \
        --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
        --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log

shot=10
task=hellaswag
lm_eval --model hf \
        --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token} \
        --tasks ${task} \
        --device cuda:0 \
        --num_fewshot ${shot} \
        --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
        --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
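
The harness can also be driven programmatically. A sketch of the 0-shot run above using lm_eval.simple_evaluate, pointing at the Hub checkpoint rather than a local directory (argument names follow the pinned dc90fec revision; treat them as an assumption if you are on a different commit):

import json
import lm_eval  # the pinned public lm-evaluation-harness (SHA dc90fec)

# Sketch: programmatic equivalent of the 0-shot CLI invocation above.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=apple/OpenELM-270M,trust_remote_code=True,add_bos_token=True",
    tasks=["arc_challenge", "arc_easy", "boolq", "hellaswag", "piqa",
           "race", "winogrande", "sciq", "truthfulqa_mc2"],
    num_fewshot=0,
    batch_size=1,
    device="cuda:0",
)
print(json.dumps(results["results"], indent=2))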

Bias, Risks, and Limitations

Our OpenELM models are released without any safety guarantees. They may produce outputs that are inaccurate, harmful, biased, or otherwise objectionable in response to user prompts. Users and developers should therefore conduct extensive safety testing and implement filtering mechanisms suited to their specific needs.