Procyon-1.5B-Qwen2-Theorem
Procyon-1.5B-Qwen2-Theorem is an experimental theorem-explanation model fine-tuned from Qwen2-1.5B. Built for mathematical theorem understanding, structured concept breakdowns, and non-reasoning explanation tasks, it targets domains where clarity and formal structure take precedence over freeform reasoning.
GGUF: https://huggingface.co/prithivMLmods/Procyon-1.5B-Qwen2-Theorem-GGUF
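The GGUF build targets local runtimes such as llama.cpp. Below is a minimal sketch using llama-cpp-python; the quantization filename is an assumption, so check the GGUF repository above for the actual files:

```python
from llama_cpp import Llama

# Hypothetical local filename; download a .gguf file from the GGUF repo linked above first.
llm = Llama(model_path="Procyon-1.5B-Qwen2-Theorem.Q4_K_M.gguf", n_ctx=2048)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You explain mathematical theorems in a structured format."},
        {"role": "user", "content": "State the Intermediate Value Theorem with hypotheses and conclusion."},
    ],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```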
Key Features
- **Mathematical Theorem Explanation**: Delivers structured, formal, and accessible explanations of theorems across pure and applied mathematics, including algebra, calculus, topology, and number theory.
- **Concept Breakdown without Deep Reasoning**: Prioritizes clarity over inference, offering non-reasoning breakdowns suited to educational tools, step-by-step formal writing, and documentation-heavy workflows.
- **Concise and Interpretable Output**: Produces content aligned with pedagogical clarity: definitions, hypotheses, conclusions, and related implications, all in a clean, human-readable structure.
- **Multi-Format Support**: Generates LaTeX, Markdown, JSON (structured concept trees), and plain text, suitable for academic publishing and automated knowledge bases; see the sketch after this list.
- **Lightweight and Efficient**: At 1.5B parameters, it fits edge devices, local academic tools, and integrated learning platforms, offering quick responses without heavy compute demands.
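As a quick check of the multi-format support, an output schema can be requested directly in the system prompt. A minimal sketch using the Transformers text-generation pipeline, assuming a recent version that accepts chat messages; the JSON keys are illustrative, not a schema the model is trained to guarantee, so validate the output before using it downstream:

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="prithivMLmods/Procyon-1.5B-Qwen2-Theorem",
    torch_dtype="auto",
    device_map="auto",
)

# Illustrative schema; the model may not follow it exactly.
messages = [
    {"role": "system", "content": "Answer only with JSON using the keys "
     "'theorem', 'hypotheses' (a list), and 'conclusion'."},
    {"role": "user", "content": "Break down the Mean Value Theorem."},
]
result = generator(messages, max_new_tokens=256)
# The assistant reply is the last message in the returned conversation.
print(result[0]["generated_text"][-1]["content"])
```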
Quickstart with Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Procyon-1.5B-Qwen2-Theorem"

# Load the model and tokenizer; device_map="auto" places weights on GPU if available.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Explain the Fundamental Theorem of Calculus in simple terms with hypotheses and conclusion."
messages = [
    {"role": "system", "content": "You are an assistant skilled at explaining mathematical theorems in a structured and simple format."},
    {"role": "user", "content": prompt}
]

# Render the chat messages into the model's prompt format.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)

# Strip the prompt tokens so only the newly generated completion is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
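For interactive tools, streaming prints tokens as they are generated rather than waiting for the full completion. A minimal sketch that reuses `model`, `tokenizer`, and `model_inputs` from the quickstart above:

```python
from transformers import TextStreamer

# Streams decoded tokens to stdout as they are produced;
# skip_prompt=True suppresses echoing the input prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(
    **model_inputs,
    max_new_tokens=512,
    streamer=streamer,
)
```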
Intended Use
- Theorem explanation and educational enrichment
- Math-aware structured content generation
- LaTeX and Markdown generation for academic writing (see the sketch after this list)
- Technical teaching tools and tutoring support
- Early-stage research on symbolic language learning
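For the LaTeX use case above, the target environment can be requested from the system prompt. A minimal sketch reusing `model` and `tokenizer` from the quickstart; the `theorem` environment is just the requested output format, not something the model enforces:

```python
# Reuses `model` and `tokenizer` from the quickstart above.
latex_messages = [
    {"role": "system", "content": "Answer in LaTeX, stating results inside a "
     "\\begin{theorem} ... \\end{theorem} environment."},
    {"role": "user", "content": "State the Pythagorean theorem with its hypotheses."},
]
text = tokenizer.apply_chat_template(latex_messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
# Decode only the tokens generated after the prompt.
print(tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```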
Limitations
- Not designed for deep reasoning or proof synthesis
- May underperform in conversational, general-purpose tasks
- Best suited for deterministic, formulaic, and structured outputs
- Performance on non-mathematical or abstract logical tasks may be limited