---
base_model:
- saishshinde15/TBH.AI_Base_Reasoning
tags:
- text-generation
- transformers
- causal-lm
- reasoning
- sft
- gguf
license: apache-2.0
language:
- en
---
TBH.AI Vortex Reasoning (GGUF)
- Model Name: saishshinde15/TBH.AI_Vortex_Reasoning_GGUF
- Developed by: TBH.AI
- License: Apache 2.0
- Fine-tuned from: TBH.AI_Base_Reasoning
- Available in: 16-bit and 4-bit GGUF formats
Overview
TethysAI Vortex Reasoning is an experimental model designed to replicate the advanced reasoning abilities of TBH.AI_Base_Reasoning, which was originally enhanced with GRPO. Instead of GRPO, this model was fine-tuned on high-quality structured data via Supervised Fine-Tuning (SFT) to reproduce the step-by-step thinking and self-questioning behavior seen in models like DeepSeek-R1.
This model has been optimized for efficient inference in GGUF format, allowing for deployment on CPU-based systems and lightweight edge devices without sacrificing reasoning capabilities.
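As a minimal sketch of pulling one of the GGUF files onto a CPU-only machine, the snippet below uses `huggingface_hub` to fetch a quantized file. The GGUF filename shown is an assumption, not taken from this card; check the repository's file listing for the actual names.

```python
# Sketch: download one of the GGUF files from the Hub.
# The filename below is hypothetical; verify it against the repository's
# "Files and versions" tab before running.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="saishshinde15/TBH.AI_Vortex_Reasoning_GGUF",
    filename="TBH.AI_Vortex_Reasoning.Q4_K_M.gguf",  # assumed 4-bit quant file name
)
print(gguf_path)  # local cache path of the downloaded model file
```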
Why This Model Stands Out
🔹 Advanced Self-Reasoning:
- The model questions itself internally before arriving at an answer.
- Similar to DeepSeek-R1, it follows a structured reasoning process.
- Uses dedicated reasoning and answer tokens internally, though they may not always be explicitly visible in responses.
🔹 No GRPO, Only High-End SFT:
- Instead of GRPO, the model learns structured reasoning directly from fine-tuned data.
- Demonstrates logical breakdowns, multi-step problem-solving, and contextual understanding.
- Achieves results comparable to the base model without reinforcement learning.
🔹 Optimized for GGUF Inference:
- Available in both 16-bit and 4-bit GGUF, enabling fast and memory-efficient execution on CPUs.
- Ideal for on-device deployment, including edge computing, embedded AI, and AI assistants (see the CPU inference sketch below).
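One common way to run a GGUF file on CPU is llama-cpp-python; this runtime is an assumption here, since the card does not prescribe one. The sketch below loads the file downloaded in the earlier snippet entirely on CPU.

```python
# Sketch: CPU-only loading with llama-cpp-python (pip install llama-cpp-python).
# `gguf_path` is the local path returned by hf_hub_download in the earlier snippet.
from llama_cpp import Llama

llm = Llama(
    model_path=gguf_path,  # path to the 16-bit or 4-bit GGUF file
    n_ctx=4096,            # context window; adjust to your memory budget
    n_threads=8,           # CPU threads; tune for your machine
)
```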
Usage
Use the system prompt below for best results:
```
You are an advanced AI assistant. Provide answers in a clear, step-by-step manner.
```
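A hedged usage sketch, assuming the llama-cpp-python runtime loaded above, that passes the recommended system prompt through the chat-completion API:

```python
# Sketch: chat completion using the recommended system prompt.
response = llm.create_chat_completion(
    messages=[
        {
            "role": "system",
            "content": "You are an advanced AI assistant. Provide answers in a clear, step-by-step manner.",
        },
        {"role": "user", "content": "A train travels 120 km in 1.5 hours. What is its average speed?"},
    ],
    max_tokens=512,
    temperature=0.7,  # sampling settings are assumptions, not values from this card
)
print(response["choices"][0]["message"]["content"])
```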