# Model Card for textdetox/Llama-pairwise-content-evaluator

This model is a fine-tuned version of meta-llama/Llama-3.1-8B-Instruct. It has been trained using TRL.

## Quick start

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load merged model (LoRA already merged)
model_id = "textdetox/Llama-pairwise-content-evaluator_merged"

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

model.eval()

# Prompt with example + test pair
messages = [
    {
        "role": "user",
        "content": '''You are given with two texts. Your task is to compare the meaning of them and decide whether they mean the same or not.

Options:
- yes (both texts have the same meaning)
- no (there is significant difference in meaning between the two texts)

Examples:
text1: "Why do you behave like an asshole?"
text2: "Why do you act so bad?"
Answer: yes

text1: "This political party is a gang of idiots"
text2: "They just don’t know their stuff"
Answer: no

text1: "There is nothing dangerous here. People like Abraham Desta who try to create chaos are just opponents of the old system."
text2: "Nothing dangerous is happening. People like Abraham Desta who try to stir things up are just enemies of the previous regime."
Answer:'''
    }
]

# Apply chat template
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Tokenize (the chat template already adds the BOS token, so don't add it again)
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)

# Generate
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=5, do_sample=True, temperature=0.15)
    result = tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:],
        skip_special_tokens=True
    )

print("Model prediction:", result.strip())

## Training framework versions

- TRL: 0.16.0
- Transformers: 4.50.1
- PyTorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.1
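
The card only states that the model was fine-tuned from meta-llama/Llama-3.1-8B-Instruct with TRL and that the published checkpoint has its LoRA adapter merged in. Below is a minimal sketch of what such a run could look like with the versions listed above, assuming supervised fine-tuning via TRL's `SFTTrainer`; the dataset id, LoRA rank/alpha, and all hyperparameters are illustrative placeholders, not the actual training configuration.

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Placeholder dataset id: the pairwise dataset used for training is not named here.
# SFTTrainer accepts conversational ("messages") or plain-text ("text") datasets.
dataset = load_dataset("your-org/pairwise-content-dataset", split="train")

trainer = SFTTrainer(
    model="meta-llama/Llama-3.1-8B-Instruct",
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="Llama-pairwise-content-evaluator",
        per_device_train_batch_size=2,
        num_train_epochs=1,
    ),
    # Illustrative LoRA settings; the real rank/alpha are not documented in this card.
    peft_config=LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"),
)
trainer.train()

# The published checkpoint is a merged model, so fold the adapter back into the
# base weights before saving or pushing.
merged_model = trainer.model.merge_and_unload()
merged_model.save_pretrained("Llama-pairwise-content-evaluator_merged")
```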
