# phi-4-finetuned

This model is a fine-tuned version of [unsloth/phi-4](https://huggingface.co/unsloth/phi-4), trained with TRL's `GRPOTrainer`.
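
Below is a minimal sketch of how a GRPO fine-tune of this base model is typically set up with TRL. The reward function, dataset, and hyperparameters are illustrative placeholders only, not the actual training recipe used for this checkpoint.

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Placeholder dataset (the real training data is not documented here).
dataset = load_dataset("trl-lib/tldr", split="train")

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 100 characters.
    return [-abs(100 - len(completion)) for completion in completions]

training_args = GRPOConfig(
    output_dir="phi-4-grpo",        # illustrative output directory
    per_device_train_batch_size=2,  # illustrative batch size
)

trainer = GRPOTrainer(
    model="unsloth/phi-4",          # base model being fine-tuned
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```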
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained("RaghuCourage9605/phi-4-fine-tuned")
tokenizer = AutoTokenizer.from_pretrained("RaghuCourage9605/phi-4-fine-tuned")

# Example inference
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
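
Since the base model is a chat model, prompts may work better when built with the tokenizer's chat template. The snippet below continues from the example above and assumes the fine-tuned tokenizer kept phi-4's chat template.

```python
# Continuing from the snippet above; assumes the tokenizer still carries a chat template.
messages = [{"role": "user", "content": "Hello, how are you?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```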