# Model Card for manjaveca/emilyrae140922
This repository contains a PEFT adapter that fine-tunes the NousResearch/Hermes-3-Llama-3.2-3B large language model to emulate the “Emily Rae” persona.
## Model Details

### Model Description
A lightweight adapter that, when applied to the base Hermes-3-Llama-3.2-3B model, steers its responses to match the style, tone, and content strategy defined for the “Emily Rae” conversational agent.
- **Developed by:** Manja Veca
- **Model type:** Causal language model (PEFT adapter)
- **Language(s):** English
- **License:** Apache-2.0
- **Finetuned from:** NousResearch/Hermes-3-Llama-3.2-3B
## Uses

### Direct Use
Load the adapter on top of the base model to get “Emily Rae” behavior:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# The adapter repository also hosts the tokenizer files
tokenizer = AutoTokenizer.from_pretrained("manjaveca/emilyrae140922")

# Load the base model in half precision, spread across available devices
base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Hermes-3-Llama-3.2-3B",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Apply the persona adapter on top of the base weights
model = PeftModel.from_pretrained(base, "manjaveca/emilyrae140922", torch_dtype=torch.float16)
```
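
Once loaded, the model can be queried like any causal LM. A minimal generation sketch follows; the prompt text and sampling settings are illustrative, and it assumes the tokenizer ships the base model's chat template:

```python
# Hypothetical prompt; any chat-style message works here
messages = [{"role": "user", "content": "Hi Emily, how was your day?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.8)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```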
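If a standalone checkpoint is more convenient for deployment, the adapter weights can be folded into the base model. A minimal sketch, assuming the adapter is a LoRA-style adapter (which PEFT's `merge_and_unload` supports) and using a hypothetical output directory:

```python
# Fold the adapter deltas into the base weights and drop the PEFT wrappers,
# returning a plain transformers model
merged = model.merge_and_unload()
merged.save_pretrained("emilyrae-merged")     # hypothetical output directory
tokenizer.save_pretrained("emilyrae-merged")
```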