# Faro-Yi-34B-DPO

This is the DPO version of wenbopan/Faro-Yi-34B. The DPO model performs better on many tasks than both Faro-Yi-34B and Yi-34B-200K, surpassing the original Yi-34B-200K by a large margin.

## How to Use

Faro-Yi-34B-DPO uses the chatml template and performs well in both short and long contexts.
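
For reference, the chatml template wraps each message in `<|im_start|>` / `<|im_end|>` markers. A prompt with a system and a user turn, ready for generation, looks roughly like this (illustrative content, not from the model card):

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
What is the Pythagorean theorem?<|im_end|>
<|im_start|>assistant
```

In practice you rarely write this by hand; `apply_chat_template`, used in the examples below, produces it for you.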

```python
import io
import requests
from PyPDF2 import PdfReader
from vllm import LLM, SamplingParams

llm = LLM(model="wenbopan/Faro-Yi-34B-DPO", kv_cache_dtype="fp8_e5m2", max_model_len=100000)

# Download the GPT-4 technical report and extract its text (~100 pages).
pdf_data = io.BytesIO(requests.get("https://arxiv.org/pdf/2303.08774.pdf").content)
document = "".join(page.extract_text() for page in PdfReader(pdf_data).pages)

question = f"{document}\n\nAccording to the paper, what is the parameter count of GPT-4?"
messages = [{"role": "user", "content": question}]  # ~83K tokens
prompt = llm.get_tokenizer().apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
output = llm.generate(prompt, SamplingParams(temperature=0.8, max_tokens=500))
print(output[0].outputs[0].text)
# Yi-34B-200K:      175B. GPT-4 has 175B \nparameters. How many models were combined to create GPT-4? Answer: 6. ...
# Faro-Yi-34B-DPO:  GPT-4 does not have a publicly disclosed parameter count due to the competitive landscape and safety implications of large-scale models like GPT-4. ...
```
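
Prompts longer than `max_model_len` are rejected by vLLM, so for arbitrary documents it can help to count tokens up front and truncate. A minimal sketch, run before building `messages` above; the 4,096-token headroom for the chat template and the answer is an arbitrary choice:

```python
tokenizer = llm.get_tokenizer()
token_ids = tokenizer.encode(question)

# Leave headroom for the chat template markers and the generated answer.
budget = 100000 - 4096
if len(token_ids) > budget:
    question = tokenizer.decode(token_ids[:budget])
```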
### Or With Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained('wenbopan/Faro-Yi-34B-DPO', device_map="cuda", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained('wenbopan/Faro-Yi-34B-DPO')
messages = [
    {"role": "system", "content": "You are a helpful assistant. Always answer with a short response."},
    {"role": "user", "content": "Tell me what is Pythagorean theorem like you are a pirate."}
]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to(model.device)
generated_ids = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.5)
# Decode only the newly generated tokens, not the echoed prompt.
response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
# Aye, matey! The Pythagorean theorem is a nautical rule that helps us find the length of the third side of a triangle. ...
```
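
For interactive use you may prefer to see tokens as they are generated. A minimal sketch using transformers' built-in `TextStreamer`, reusing `model`, `tokenizer`, and `input_ids` from the example above:

```python
from transformers import TextStreamer

# skip_prompt=True prints only the newly generated tokens, not the input.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.5, streamer=streamer)
```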