Tags: Transformers · Safetensors · English · text-generation-inference · unsloth · llama · trl

Uploaded model

from datasets import load_dataset

# Datasets used for fine-tuning
dataset = load_dataset("NobodyExistsOnTheInternet/toxicqa", split="train")
dataset2 = load_dataset("Nitral-AI/Discover-Instruct-6k-Distilled-R1-70b-ShareGPT-Think-Tags", split="train")
dataset3 = load_dataset("Nitral-Archive/RP_Alignment-ShareGPT", split="train")
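The card does not include the actual training script, so the following is only a minimal sketch of how a LoRA fine-tune over these datasets could be set up with Unsloth and TRL's SFTTrainer. The LoRA rank, target modules, hyperparameters, and the assumption that each dataset has already been mapped to a common "text" column are illustrative choices, not the author's configuration, and the SFTTrainer arguments shown match older TRL versions used in Unsloth notebooks.

from datasets import concatenate_datasets
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments

# Load the 4-bit base model named on this card.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit",
    max_seq_length=4096,
    load_in_4bit=True,
)

# Attach LoRA adapters (rank and target modules are assumptions).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Assumes each dataset has already been converted to a shared "text"
# column (e.g. via the tokenizer's chat template); conversion not shown.
train_dataset = concatenate_datasets([dataset, dataset2, dataset3])

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    dataset_text_field="text",
    max_seq_length=4096,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()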
  • Developed by: bunnycore
  • License: apache-2.0
  • Finetuned from model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
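Because this repository contains a LoRA adapter rather than a merged model, it has to be applied on top of the base model at load time. Below is a minimal usage sketch with transformers and peft; the repository IDs come from this card, while the prompt and generation settings are arbitrary examples (the 4-bit base model additionally requires bitsandbytes).

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit"
adapter_id = "bunnycore/Llama-3.2-3b-RP-Toxic-R1-lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Apply the LoRA adapter from this repository on top of the base weights.
model = PeftModel.from_pretrained(model, adapter_id)

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))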

