# PoPilot - Fine-tuned Qwen2.5-Coder-14B
This model is a fine-tuned version of Qwen/Qwen2.5-Coder-14B with the LoRA adapters merged into the base weights.
## Model Details
- Base Model: Qwen/Qwen2.5-Coder-14B
- Fine-tuning Method: LoRA (Low-Rank Adaptation); see the sketch after this list
- Training: Supervised Fine-Tuning (SFT)
- Merged: Full model weights (LoRA merged with base)
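
For background, LoRA freezes the pretrained weight matrix and learns only a low-rank update on top of it. The following is a minimal sketch of that idea, not this repo's training code; the dimensions, rank, and `alpha` value are hypothetical.

```python
import torch

# Hypothetical shapes for illustration: a frozen weight W plus a rank-r update.
d, r = 1024, 16
alpha = 32

W = torch.randn(d, d)          # frozen pretrained weight (not updated during training)
A = torch.randn(r, d) * 0.01   # trainable down-projection
B = torch.zeros(d, r)          # trainable up-projection, zero-initialized so training starts from W

def lora_forward(x: torch.Tensor) -> torch.Tensor:
    # Effective weight is W + (alpha / r) * B @ A; only A and B are trained.
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T
```

"Merged" means the low-rank product is folded into `W` once after training, so inference needs no extra adapter weights.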
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "Justin6657/PoPilot",
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    "Justin6657/PoPilot",
    trust_remote_code=True,
)

# Example usage
prompt = "Write a Python function to calculate Fibonacci numbers:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=200,  # caps newly generated tokens (max_length would also count the prompt)
    do_sample=True,      # sampling must be enabled for temperature to take effect
    temperature=0.7,
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
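
If the tokenizer ships a chat template (Qwen2.5 checkpoints typically include one), you can format the prompt as a conversation instead of raw text. A minimal sketch, assuming the template is present:

```python
# Assumes tokenizer.chat_template is set; falls back to an error otherwise.
messages = [
    {"role": "user", "content": "Write a Python function to calculate Fibonacci numbers."}
]
chat_prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,  # append the assistant turn marker
)
inputs = tokenizer(chat_prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```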
## Training Details
This model was fine-tuned with LoRA adapters, which were then merged back into the full model weights.
Original LoRA checkpoint path: `/net/projects/CLS/DSI_clinic/justin/checkpoint/augmented_train_Qwen2.5-Coder-14B_full-model_repair-synth_repair-simple-phase4`
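
For reference, merging LoRA adapters into base weights is commonly done with the PEFT library. A minimal sketch of that step, not necessarily the exact commands used for this release; the adapter and output paths are illustrative:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Illustrative paths; the actual adapter checkpoint is the directory listed above.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-Coder-14B", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "path/to/lora_checkpoint")

# Fold the low-rank deltas into the base weights and drop the adapter modules.
merged = model.merge_and_unload()
merged.save_pretrained("PoPilot-merged")
```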