
Pictor-1338-QwenP-1.5B

Pictor-1338-QwenP-1.5B is a code reasoning LLM fine-tuned from Qwen2.5-1.5B using distributed reinforcement learning (RL). The model is designed to enhance coding proficiency, debugging accuracy, and step-by-step reasoning in software development tasks across multiple programming languages.

Key Features

  1. Code Reasoning & Explanation
    Trained to analyze, generate, and explain code with a focus on logic, structure, and clarity. Supports functional, object-oriented, and procedural paradigms.

  2. Reinforcement Learning Fine-Tuning
    Enhanced using distributed RL, improving reward-aligned behavior in tasks like fixing bugs, completing functions, and understanding abstract instructions.

  3. Multi-Language Support
    Works fluently with Python, JavaScript, C++, and shell scripting, among other languages, making it well suited for general-purpose programming, scripting, and algorithmic tasks.

  4. Compact and Efficient
    At just 1.5B parameters, it is lightweight enough for edge deployments and developer tools while retaining strong reasoning capability.

  5. Debugging and Auto-Fix Capabilities
    Built to identify bugs, recommend corrections, and provide context-aware explanations of issues in codebases (a minimal debugging prompt is sketched after this list).
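
As a concrete illustration of the debugging use case, the snippet below frames a bug-fix request as chat messages that can be passed to the generation code in the Quickstart below; the buggy function and prompt wording are illustrative assumptions, not examples from the model card.

# Hypothetical buggy function, used only to illustrate prompt framing.
buggy_code = '''def average(values):
    total = 0
    for v in values:
        total += v
    return total / len(values)  # crashes on an empty list
'''

messages = [
    {"role": "system", "content": "You are a code reasoning assistant. Identify bugs, propose a fix, and explain the issue step-by-step."},
    {"role": "user", "content": f"Find and fix the bug in this function:\n\n{buggy_code}"}
]
# Feed these messages through the same apply_chat_template / generate
# pipeline shown in the Quickstart.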

Quickstart with Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Pictor-1338-QwenP-1.5B"

# Load the model and tokenizer; device_map="auto" places weights on a GPU when available.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Write a Python function that checks if a number is prime, and explain how it works."

messages = [
    {"role": "system", "content": "You are a code reasoning assistant. Your job is to write correct code and explain the logic step-by-step."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated text remains.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
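
The greedy decoding above gives deterministic answers; for more varied explanations, sampling can be enabled. The sketch below uses standard transformers generation arguments; the specific values are illustrative defaults, not tuned recommendations for this model.

# Optional: sampled decoding. do_sample, temperature, and top_p are standard
# transformers generation arguments; these particular values are illustrative.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.9
)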

Intended Use

  • Code Assistance & IDE Integration:
    Smart autocomplete, bug detection, and function suggestion for developers.

  • Learning & Explanation:
    Ideal for students and educators in programming courses or interactive coding tutorials.

  • Automated Code Review & QA:
    Analyzes logic, structure, and potential bugs in code for quality assurance.

  • Edge & DevTool Deployments:
    Lightweight enough for browser extensions, local developer tools, and CLI-based assistants.
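
As a sketch of the CLI-based assistant mentioned above, the loop below wraps the Quickstart model and tokenizer in a simple read-eval-print interface; the system prompt, token budget, and exit handling are assumptions for illustration, not part of the model card.

# Minimal REPL-style assistant; assumes `model` and `tokenizer` from the
# Quickstart are already loaded. Prompt wording and limits are illustrative.
def chat(user_input):
    messages = [
        {"role": "system", "content": "You are a concise code reasoning assistant."},
        {"role": "user", "content": user_input}
    ]
    text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=256)
    new_tokens = output_ids[0][inputs.input_ids.shape[1]:]  # drop the prompt tokens
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

while True:
    query = input(">>> ")
    if query.strip().lower() in {"exit", "quit"}:
        break
    print(chat(query))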

Limitations

  1. Scaling Challenges
    May not handle large, complex codebases as well as larger models.

  2. Inconsistent Creativity
    May vary in performance for creative or unconventional coding tasks.

  3. Security Considerations
    Outputs should be audited to avoid insecure or vulnerable code patterns.

  4. Prompt Design Sensitivity
    Produces better output when prompts include clear instructions, function signatures, or worked examples (see the prompt sketch below).
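
To illustrate this sensitivity, compare a vague request with a structured one that supplies a function signature and a worked example; both prompts below are illustrative, not benchmarked recommendations.

# A vague prompt, likely to produce underspecified output:
vague_prompt = "sort stuff"

# A structured prompt with a signature and a worked example, which small
# models like this one typically handle more reliably:
structured_prompt = """Write a Python function with this signature:

    def sort_records(records, key):

Return a new list of dicts sorted by the given key, without modifying the
input. Example: sort_records([{"id": 2}, {"id": 1}], "id") should return
[{"id": 1}, {"id": 2}].
"""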

Model Details

  • Format: Safetensors
  • Model size: 1.78B params
  • Tensor type: F32

Model tree for prithivMLmods/Pictor-1338-QwenP-1.5B

  • Base model: Qwen/Qwen2.5-1.5B
  • Quantizations: 4 models
