Vexoo TrailBlazer-1B - Enhanced Reasoning
Vexoo TrailBlazer-1B is a 1B-parameter language model fine-tuned for mathematical, logical, and structured reasoning tasks. Built on Llama-3.2-1B, it adds custom reasoning adapters and has been extensively fine-tuned on problem-solving datasets.
Try the Model
Use the inference widget above to test the model with reasoning problems!
Model Details
- Parameter Count: 1 billion parameters
- Training Methodology:
  - Custom cascading reasoning adapters in critical transformer layers
- Capabilities:
  - Step-by-step mathematical problem solving
  - Logical deduction and inference
  - Structured reasoning with clear explanations
  - Self-verification of answers
Recommended System Prompt
You are an advanced reasoning assistant that excels at solving complex problems. Follow these guidelines:
1. Break down problems into clear, logical steps
2. Consider multiple approaches when appropriate
3. Identify key information and relevant concepts
4. Provide clear explanations for each step in your reasoning
5. Verify your conclusions with examples or counterexamples
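In practice, this prompt is passed as the system message of a chat-format request. A minimal sketch (the SYSTEM_PROMPT and tokenizer names match the full, runnable example in the Usage section below):

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},  # the recommended prompt above
    {"role": "user", "content": "If a train travels at 60 miles per hour, how far will it travel in 2.5 hours?"},
]
# With a Hugging Face tokenizer that has a chat template (see Usage), this becomes the model input:
prompt_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)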
Usage
# IMPORTANT: Run this in a fresh runtime or after restarting your runtime
# Import unsloth first before anything else to avoid circular imports
import unsloth
import torch
# Then import specific modules
from unsloth import FastLanguageModel
from unsloth.chat_templates import get_chat_template
import time
# Your HuggingFace repository name
REPO_NAME = "vexoolabs/Vexoo-TrailBlazer-1B"
print(f"Testing model from HuggingFace: {REPO_NAME}")
# System prompt
SYSTEM_PROMPT = """You are an advanced reasoning assistant that excels at solving complex problems. Follow these guidelines:
1. Break down problems into clear, logical steps
2. Consider multiple approaches when appropriate
3. Identify key information and relevant concepts
4. Provide clear explanations for each step in your reasoning
5. Verify your conclusions with examples or counterexamples"""
# Load model with Unsloth
print("Loading model...")
use_bf16 = torch.cuda.is_bf16_supported() if torch.cuda.is_available() else False
dtype = torch.bfloat16 if use_bf16 else torch.float16
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name=REPO_NAME,
    max_seq_length=2048,
    dtype=dtype,
)
# Configure tokenizer
tokenizer.pad_token = tokenizer.eos_token
tokenizer = get_chat_template(tokenizer, chat_template="llama-3.1")
# Prepare for inference
FastLanguageModel.for_inference(model)
print("โ
Model loaded successfully!")
# Test with sample questions
test_questions = [
    "If a train travels at 60 miles per hour, how far will it travel in 2.5 hours?",
    "A store sells shoes at $60 per pair and socks at $8 per pair. If I buy 2 pairs of shoes and 3 pairs of socks, what is my total bill?",
    "Tell me an interesting fact about the universe!",
    "Explain quantum computing in simple terms"
]
for i, question in enumerate(test_questions):
    print(f"\n\nTesting question {i+1}: {question}")

    # Create messages
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": question}
    ]

    # Apply chat template
    inputs = tokenizer.apply_chat_template(
        messages,
        tokenize=True,
        add_generation_prompt=True,
        return_tensors="pt"
    ).to(model.device)

    # Generate response with timing
    start_time = time.time()
    with torch.no_grad():
        outputs = model.generate(
            inputs,
            max_new_tokens=700,
            temperature=0.7,
            top_p=0.92,
            repetition_penalty=1.05,
            do_sample=True,
            pad_token_id=tokenizer.pad_token_id,
            eos_token_id=tokenizer.eos_token_id,
        )
    end_time = time.time()

    # Decode response
    response = tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True)
    response_time = end_time - start_time

    print(f"\nResponse (generated in {response_time:.2f} seconds):")
    print("-" * 80)
    print(response)
    print("-" * 80)

print("\n✅ Model test completed! Your model is working correctly on HuggingFace.")
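If Unsloth is not available in your environment, the checkpoint should also load through the standard transformers API. The following is a minimal, untested sketch, assuming the repository contains a regular Llama-3.2-style checkpoint with a chat template (requires transformers, and accelerate for device_map):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Abbreviated here; use the full recommended system prompt from above
SYSTEM_PROMPT = "You are an advanced reasoning assistant that excels at solving complex problems."

tokenizer = AutoTokenizer.from_pretrained("vexoolabs/Vexoo-TrailBlazer-1B")
model = AutoModelForCausalLM.from_pretrained(
    "vexoolabs/Vexoo-TrailBlazer-1B",
    torch_dtype="auto",
    device_map="auto",  # assumes accelerate is installed; otherwise move the model manually
)

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "If a train travels at 60 miles per hour, how far will it travel in 2.5 hours?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=700, temperature=0.7, top_p=0.92, do_sample=True)
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))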