T5-Small with LoRA on OpenCodeReasoning
This is a LoRA adapter for T5-small, fine-tuned with PEFT on a subset of NVIDIA's OpenCodeReasoning dataset. An improved version will be uploaded soon.
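The adapter was trained with PEFT's LoRA wrapper around the T5-small base model. Below is a minimal sketch of how such a setup is typically configured; the rank, alpha, dropout, and target modules are illustrative assumptions, not the exact hyperparameters used for this checkpoint.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

# Base model and tokenizer.
base_model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
tokenizer = AutoTokenizer.from_pretrained("t5-small")

# LoRA configuration -- r, lora_alpha, lora_dropout, and target_modules are
# assumptions for illustration, not the exact values used for this checkpoint.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q", "v"],  # T5 attention query/value projections
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the LoRA matrices are trainable
```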
Loss Curve
| Step | Train Loss | Val Loss |
|---|---|---|
| 50 | 8.63 | 8.17 |
| 100 | 6.04 | 5.35 |
| 150 | 5.31 | 4.90 |
| 200 | 5.19 | 4.71 |
| 250 | 4.94 | 4.59 |
| 300 | 4.95 | 4.51 |
| 350 | 4.79 | 4.46 |
| 400 | 4.89 | 4.42 |
| 450 | 4.69 | 4.40 |
Final train loss: 4.69. Final eval loss: 4.40.
Notes
- Trained on a subset of OpenCodeReasoning due to Colab memory limits (see the data-loading sketch below)
- Load with PeftModel on top of the t5-small base model
- Metric: loss only (BLEU was skipped due to the structure of the outputs)
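As a rough illustration of how a subset can be taken at load time, the sketch below slices the dataset split with the `datasets` library; the configuration name, split name, and subset size here are assumptions and may differ from what was actually used for training.

```python
from datasets import load_dataset

# Load only a slice of the dataset to fit within Colab memory.
# The config/split names and subset size are assumptions; check the
# nvidia/OpenCodeReasoning dataset card for the actual names.
subset = load_dataset(
    "nvidia/OpenCodeReasoning",
    "split_0",
    split="split_0[:5000]",
)
print(subset)
```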
License
Apache 2.0
Example Usage
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel, PeftConfig

# Load the adapter config to find the base model, then attach the LoRA weights.
config = PeftConfig.from_pretrained("ShahzebKhoso/t5-small-opencode-lora")
base_model = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base_model, "ShahzebKhoso/t5-small-opencode-lora")
tokenizer = AutoTokenizer.from_pretrained("ShahzebKhoso/t5-small-opencode-lora")

# Generate code from a natural-language prompt.
inputs = tokenizer("generate code: write a function to reverse a string", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
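For slightly faster inference, the LoRA weights can be merged into the base model so no adapter indirection remains at generation time. This uses PEFT's standard merge utility and is not specific to this checkpoint:

```python
# Merge the LoRA weights into the base model and drop the PEFT wrapper.
merged_model = model.merge_and_unload()
outputs = merged_model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```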
Base model: google-t5/t5-small