OLMo Code Python2-3 Tagged Model
This is a LoRA adapter for the allenai/OLMo-1B-hf base model, fine-tuned for Python 2 and Python 3 code generation with language tagging.
Model Details
- Base Model: allenai/OLMo-1B-hf
- Model Type: LoRA Adapter
- Task: Causal Language Modeling for Python 2 and 3 code
- Language: Python 2 and 3 with language tagging
- License: MIT
- Fine-tuned by: dipikakhullar
Model Description
This model is a LoRA adapter that has been fine-tuned on Python 2 and 3 code data with language tagging. It extends the capabilities of the base OLMo-1B model specifically for Python code generation tasks, with the ability to distinguish between Python 2 and Python 3 syntax.
LoRA Configuration
- PEFT Type: LoRA
- LoRA Alpha: 16
- LoRA Dropout: 0.05
- LoRA Rank (r): 8
- Target Modules: down_proj, q_proj, v_proj, up_proj, k_proj, gate_proj, o_proj
- Task Type: CAUSAL_LM
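For reference, the settings above correspond roughly to the following PEFT configuration (a sketch for illustration; the exact object used during training is not published in this card):

```python
from peft import LoraConfig, TaskType

# LoRA configuration mirroring the values listed above
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```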
Uses
Direct Use
This model is intended for Python 2 and 3 code generation tasks with language tagging. It can be used to:
- Generate Python code completions for both Python 2 and 3
- Assist with code writing in both Python versions
- Provide code suggestions with language awareness
- Handle Python 2 to Python 3 migration tasks
Downstream Use
The model can be further fine-tuned for specific Python programming tasks or integrated into code generation applications that need to handle both Python versions.
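As a sketch of such an integration, a thin wrapper that prepends the language tag before generation might look like the following; the helper name, sampling settings, and example prompt are illustrative assumptions, not part of the released model:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model and the adapter (same setup as the quick-start example below)
base_model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-1B-hf")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1B-hf")
model = PeftModel.from_pretrained(base_model, "dipikakhullar/olmo-code-python2-3-tagged")

# Hypothetical helper that prepends the [python2]/[python3] tag before generating
def complete_code(snippet, version="python3", max_new_tokens=100):
    prompt = f"[{version}] {snippet}"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens,
                             do_sample=True, temperature=0.7)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Example: request a Python 2 style completion
print(complete_code("def read_lines(path):", version="python2"))
```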
Out-of-Scope Use
This model is specifically designed for Python 2 and 3 code generation and may not perform well for:
- Other programming languages
- Natural language tasks
- Non-code related tasks
How to Get Started with the Model
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-1B-hf")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1B-hf")

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "dipikakhullar/olmo-code-python2-3-tagged")
model.eval()

# Example usage for Python 3: the [python3] tag requests Python 3 style code
prompt = "[python3] def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# Example usage for Python 2: the [python2] tag requests Python 2 style code
prompt = "[python2] def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
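If adapter-free inference is preferred for deployment, the LoRA weights can also be merged into the base model with PEFT's standard merge utility (the output directory name below is just an example):

```python
# Merge the LoRA weights into the base model and save a standalone checkpoint
merged_model = model.merge_and_unload()
merged_model.save_pretrained("olmo-code-python2-3-merged")
tokenizer.save_pretrained("olmo-code-python2-3-merged")
```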
Training Details
Training Data
The model was fine-tuned on cleaned Python 2 and 3 code data with language tagging, specifically prepared for language model training.
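The exact data format is not documented in this card; based on the prompt format in the quick-start example, a tagged sample presumably looks something like this (purely illustrative, not taken from the actual training set):

```
[python2] print "hello, %s" % name
[python3] print("hello, {}".format(name))
```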
Training Procedure
- Base Model: allenai/OLMo-1B-hf
- Fine-tuning Method: LoRA (Low-Rank Adaptation)
- Checkpoint: checkpoint-2100
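A minimal sketch of how such a LoRA fine-tuning run could be set up with PEFT and the Hugging Face Trainer is shown below. The dataset file, tokenization settings, and training hyperparameters are assumptions for illustration; only the base model and the LoRA settings come from this card.

```python
from datasets import load_dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-1B-hf")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1B-hf")
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# LoRA settings taken from the configuration section above
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
model = get_peft_model(base_model, lora_config)

# Hypothetical dataset of language-tagged Python code with a "text" column
dataset = load_dataset("json", data_files="tagged_python_code.jsonl")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="olmo-code-lora", per_device_train_batch_size=4,
                           num_train_epochs=1, logging_steps=100, save_steps=300),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```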
Model Card Contact
- Author: dipikakhullar
- Repository: https://huggingface.co/dipikakhullar/olmo-code-python2-3-tagged
Framework versions
- PEFT 0.7.1
- Transformers