Lambda-Cancri-1.5B-Exp
Lambda-Cancri-1.5B-Exp is a general-purpose LLM fine-tuned from Qwen2.5-1.5B using optimized supervised fine-tuning (SFT). The model is designed to strengthen coding proficiency, reasoning ability, and step-by-step explanation across software development and general-knowledge tasks.
Key Features
Code Reasoning & Explanation
Trained to analyze, generate, and explain code with a focus on logic, structure, and clarity. Supports functional, object-oriented, and procedural paradigms.
Optimized Supervised Fine-Tuning (SFT)
Fine-tuned on high-quality supervised datasets, ensuring strong performance in tasks like code generation, bug fixing, function completion, and abstract reasoning.
Multi-Language & General Task Support
Works fluently with Python, JavaScript, C++, Shell, and general knowledge tasks, making it well suited to programming, scripting, algorithmic problem-solving, and general-purpose reasoning.
Compact and Efficient
At just 1.5B parameters, it's lightweight enough for edge deployments, developer tools, and general-purpose assistants, while maintaining strong reasoning and coding capabilities.
Debugging and Auto-Fix Capabilities
Built to identify bugs, recommend corrections, and provide context-aware explanations of issues across a wide range of codebases; a prompting sketch follows this list.
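For the debugging use case, a request can be phrased as a system/user message pair like the one below. This is a minimal sketch: the buggy mean function and the prompt wording are hypothetical, and the messages are meant to be fed through the Quickstart pipeline in the next section in place of its messages list.

# Hypothetical debugging prompt; the buggy function is illustrative only.
buggy_code = """
def mean(values):
    total = 0
    for v in values:
        total += v
    return total / len(values)  # fails with ZeroDivisionError on an empty list
"""

debug_messages = [
    {"role": "system", "content": "You are a careful debugging assistant. Identify the bug, explain it, then show the corrected code."},
    {"role": "user", "content": f"Find and fix the bug in this function:\n{buggy_code}"}
]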
Quickstart with Transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Lambda-Cancri-1.5B-Exp"

# Load the model and tokenizer; device_map="auto" places weights on the available GPU/CPU.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Write a Python function that checks if a number is prime, and explain how it works."

messages = [
    {"role": "system", "content": "You are a helpful coding and reasoning assistant. Your job is to write correct code and explain the logic step-by-step."},
    {"role": "user", "content": prompt}
]

# Render the chat messages with the model's chat template.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated completion remains.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
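The Quickstart uses greedy decoding by default; sampling behavior can be tuned per task. Continuing from the snippet above, the call below uses standard transformers generate options with illustrative values, not settings published for this model.

# Illustrative sampling settings; tune per task and verify output quality.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512,
    do_sample=True,          # sample instead of greedy decoding
    temperature=0.7,         # lower values make output more deterministic
    top_p=0.9,               # nucleus sampling cutoff
    repetition_penalty=1.1,  # mildly discourage repeated phrases
)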
Intended Use
Code Assistance & IDE Integration:
Smart autocomplete, bug detection, function suggestion, and explanations for developers.
Learning & Education:
Well suited to students, educators, and self-learners in programming and technical fields.
Automated Code Review & QA:
Assists in logic analysis, structure evaluation, and bug spotting in code for quality assurance.
General Purpose Assistant:
Provides help beyond coding: answering general queries, solving reasoning tasks, and assisting in workflows.
Edge & DevTool Deployments:
Lightweight enough for browser extensions, desktop applications, and CLI-based assistants; see the quantized-loading sketch after this list.
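For memory-constrained edge or devtool deployments, one common option is 4-bit quantized loading through bitsandbytes. This is a sketch under the assumption that bitsandbytes and a CUDA GPU are available; it is not an official recipe for this model, so validate output quality after quantizing.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "prithivMLmods/Lambda-Cancri-1.5B-Exp"

# 4-bit NF4 quantization shrinks the memory footprint at some quality cost.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)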
Limitations
Scaling Challenges
May not handle extremely large or highly complex projects as well as larger models.
Creativity Variability
May show inconsistent performance on highly creative or unconventional coding tasks.
Security Considerations
Outputs should be audited to ensure generated code is secure and safe before use.
Instruction Sensitivity
Responds best to clear, structured prompts and task instructions; an illustrative example follows.
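Given the instruction sensitivity noted above, prompts that spell out the task, constraints, and expected output format tend to work better than terse requests. The contrast below is illustrative; the exact wording is an assumption, not a tested recipe.

# A vague request leaves the model guessing at scope and format.
vague_prompt = "fix my sorting code"

# A structured request states the task, constraints, and output format.
structured_prompt = (
    "Task: Fix the bug in the Python function below.\n"
    "Constraints: Keep the function signature unchanged.\n"
    "Output format: corrected code first, then a step-by-step explanation.\n\n"
    "def sort_desc(xs):\n"
    "    return sorted(xs)  # should sort in descending order\n"
)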