Recommended prompt format (ChatML-style):

```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{user_message}<|im_end|>
<|im_start|>assistant
```
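For inference stacks that do not expose `apply_chat_template`, the prompt can be assembled by hand. A minimal sketch, assuming newlines separate the role markers from the message text; the tokenizer's chat template (used in the usage example below) remains the authoritative source:

```python
# Hand-rolled ChatML-style prompt (whitespace is assumed; verify against
# tokenizer.apply_chat_template before relying on it).
system_message = "You are a helpful assistant."
user_message = "What are some potential applications for quantum computing?"

prompt = (
    f"<|im_start|>system\n{system_message}<|im_end|>\n"
    f"<|im_start|>user\n{user_message}<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```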
Recommended inference parameters (setting `penalty_alpha` together with a small `top_k` enables contrastive search decoding in `transformers`):

```yaml
penalty_alpha: 0.5
top_k: 4
repetition_penalty: 1.01
```
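As a minimal sketch (assuming the same model id as the usage example below), these parameters can also be applied outside the pipeline wrapper through a `GenerationConfig`; the usage example that follows passes them directly to the pipeline call instead.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

model_id = "Felladrin/Llama-160M-Chat-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# penalty_alpha > 0 combined with a small top_k triggers contrastive search.
generation_config = GenerationConfig(
    max_new_tokens=256,
    penalty_alpha=0.5,
    top_k=4,
    repetition_penalty=1.01,
)

# Tokenize a single-turn conversation using the model's own chat template.
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello!"}],
    add_generation_prompt=True,
    return_tensors="pt",
)
output_ids = model.generate(input_ids, generation_config=generation_config)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```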
Usage example:

```python
from transformers import pipeline

# Load the model through the text-generation pipeline.
generate = pipeline("text-generation", "Felladrin/Llama-160M-Chat-v1")

messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant who answers user's questions with details and curiosity.",
    },
    {
        "role": "user",
        "content": "What are some potential applications for quantum computing?",
    },
]

# Render the conversation into the model's ChatML-style prompt.
prompt = generate.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Generate with the recommended contrastive-search parameters.
output = generate(
    prompt,
    max_new_tokens=1024,
    penalty_alpha=0.5,
    top_k=4,
    repetition_penalty=1.01,
)

print(output[0]["generated_text"])
```
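By default the pipeline returns the prompt concatenated with the completion. If only the assistant's reply is wanted, the standard `return_full_text=False` pipeline argument can be added to the same call; a small variant of the call above:

```python
# Variant of the call above that returns only the newly generated text.
output = generate(
    prompt,
    max_new_tokens=1024,
    penalty_alpha=0.5,
    top_k=4,
    repetition_penalty=1.01,
    return_full_text=False,
)
print(output[0]["generated_text"])
```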
Open LLM Leaderboard evaluation results:

| Metric | Value |
|---|---|
| Avg. | 30.27 |
| AI2 Reasoning Challenge (25-Shot) | 24.74 |
| HellaSwag (10-Shot) | 35.29 |
| MMLU (5-Shot) | 26.13 |
| TruthfulQA (0-shot) | 44.16 |
| Winogrande (5-shot) | 51.30 |
| GSM8k (5-shot) | 0.00 |
Detailed results can be found here.
Open LLM Leaderboard v2 evaluation results:

| Metric | Value |
|---|---|
| Avg. | 4.10 |
| IFEval (0-Shot) | 15.75 |
| BBH (3-Shot) | 3.17 |
| MATH Lvl 5 (4-Shot) | 0.00 |
| GPQA (0-shot) | 1.01 |
| MuSR (0-shot) | 3.17 |
| MMLU-PRO (5-shot) | 1.51 |