
Lambda-Cancri-1.5B-Exp

Lambda-Cancri-1.5B-Exp is a general-purpose LLM fine-tuned from Qwen2.5-1.5B using optimized supervised fine-tuning (SFT). It is designed to strengthen coding proficiency, reasoning ability, and step-by-step explanation across software development and general-knowledge tasks.

Key Features

  1. Code Reasoning & Explanation
    Trained to analyze, generate, and explain code with a focus on logic, structure, and clarity. Supports functional, object-oriented, and procedural paradigms.

  2. Optimized Supervised Fine-Tuning (SFT)
    Fine-tuned on high-quality supervised datasets, targeting strong performance on code generation, bug fixing, function completion, and abstract reasoning.

  3. Multi-Language & General Task Support
    Handles Python, JavaScript, C++, and shell scripting, as well as general-knowledge tasks, making it well suited for programming, scripting, algorithmic problem-solving, and general-purpose reasoning.

  4. Compact and Efficient
    At just 1.5B parameters, it is lightweight enough for edge deployments, developer tools, and general-purpose assistants while maintaining strong reasoning and coding capabilities (a low-memory loading sketch follows this list).

  5. Debugging and Auto-Fix Capabilities
    Built to identify bugs, recommend corrections, and provide context-aware explanations of issues across a wide range of codebases.
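
To illustrate the low-memory point in feature 4, the sketch below loads the checkpoint in 4-bit precision via bitsandbytes. The quantization settings are illustrative assumptions rather than values published for this model, and bitsandbytes requires a CUDA-capable GPU.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "prithivMLmods/Lambda-Cancri-1.5B-Exp"

# Illustrative 4-bit quantization settings; tune for your hardware.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_quant_type="nf4"
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)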

Quickstart with Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Lambda-Cancri-1.5B-Exp"

# Load the checkpoint in its native precision and place it
# automatically across the available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Write a Python function that checks if a number is prime, and explain how it works."

messages = [
    {"role": "system", "content": "You are a helpful coding and reasoning assistant. Your job is to write correct code and explain the logic step-by-step."},
    {"role": "user", "content": prompt}
]

# Render the chat messages into the model's expected prompt format.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)

# Strip the prompt tokens so only the newly generated text remains.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
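
The call above decodes greedily. If you want more varied output, model.generate also accepts standard sampling arguments; the values below are generic starting points, not settings tuned or published for this model.

# Sampling instead of greedy decoding; values are illustrative defaults.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.8,
    repetition_penalty=1.05
)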

Intended Use

  • Code Assistance & IDE Integration:
    Smart autocomplete, bug detection, function suggestion, and explanations for developers.

  • Learning & Education:
    Perfect for students, educators, and self-learners in programming and technical fields.

  • Automated Code Review & QA:
    Assists in logic analysis, structure evaluation, and bug spotting in code for quality assurance (a minimal review-helper sketch follows this list).

  • General Purpose Assistant:
    Provides help beyond coding—answering general queries, solving reasoning tasks, and assisting in workflows.

  • Edge & DevTool Deployments:
    Lightweight for browser extensions, desktop applications, and CLI-based assistants.
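
To make the code-review use case concrete, here is a minimal sketch of a helper that reuses the model and tokenizer loaded in the Quickstart. The review_code function and its system prompt are hypothetical illustrations, not part of the released model.

def review_code(snippet):
    # Frame the request as a review task; the system prompt is an assumption.
    messages = [
        {"role": "system", "content": "You are a code reviewer. Identify bugs, explain each issue, and propose a fix."},
        {"role": "user", "content": f"Review this code:\n\n{snippet}"}
    ]
    text = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=512)
    # Drop the prompt tokens, keeping only the model's review.
    new_tokens = output_ids[0][inputs.input_ids.shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

print(review_code("def avg(xs): return sum(xs) / len(xs)  # fails on an empty list"))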

Limitations

  1. Scaling Challenges
    May not handle extremely large or highly complex projects as well as larger models.

  2. Creativity Variability
    May show inconsistent performance on highly creative or unconventional coding tasks.

  3. Security Considerations
    Generated code should be audited before use to confirm it is secure and safe.

  4. Instruction Sensitivity
    Responds best to clear, structured prompts and explicit task instructions (see the structured-prompt sketch after this list).
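
As an example of the instruction-sensitivity point, a prompt that spells out the task, constraints, and output format tends to work better than a one-line request. The template below is an illustrative suggestion, not a format the model was specifically trained on; it plugs into the messages list from the Quickstart.

# A structured prompt: explicit task, constraints, and output format.
prompt = (
    "Task: Write a Python function that merges two sorted lists.\n"
    "Constraints: O(n + m) time; do not use sorted() or list.sort().\n"
    "Output format: the function first, then a step-by-step explanation."
)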

Model Details

  • Base model: Qwen/Qwen2.5-1.5B
  • Model size: 1.54B parameters
  • Tensor type: F32 (Safetensors)