TethysAI Vortex Reasoning (GGUF)

  • Model Name: saishshinde15/TethysAI_Vortex_Reasoning_GGUF
  • Developed by: TethysAI
  • License: Apache 2.0
  • Fine-tuned from: TethysAI_Base_Reasoning
  • Available in: 16-bit and 4-bit GGUF formats

Overview

TethysAI Vortex Reasoning is an experimental model designed to replicate the advanced reasoning abilities of TethysAI_Base_Reasoning, which was originally enhanced with GRPO. Instead of GRPO, this model was fine-tuned on high-quality structured data using high-end Supervised Fine-Tuning (SFT) to reproduce the step-by-step thinking and self-questioning mechanisms seen in models like DeepSeek-R1.

This model has been optimized for efficient inference in GGUF format, allowing for deployment on CPU-based systems and lightweight edge devices without sacrificing reasoning capabilities.


Why This Model Stands Out

🔹 Advanced Self-Reasoning:

  • The model questions itself internally before arriving at an answer.
  • Similar to DeepSeek-R1, it follows a structured reasoning process.
  • Uses dedicated reasoning and answer tokens internally, though they may not always be explicitly visible in responses.

🔹 No GRPO, Only High-End SFT:

  • Instead of GRPO, the model learns structured reasoning directly from fine-tuned data.
  • Demonstrates logical breakdowns, multi-step problem-solving, and contextual understanding.
  • Achieves results comparable to the base model without reinforcement learning.

🔹 Optimized for GGUF Inference:

  • Available in both 16-bit and 4-bit GGUF, enabling fast and memory-efficient execution on CPUs.
  • Ideal for on-device deployment, including edge computing, embedded AI, and AI assistants; see the loading sketch below.
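
The following is a minimal loading sketch using llama-cpp-python, one common way to run GGUF files on CPU. The quantization filename pattern, context size, and thread count below are assumptions; check the repository's file list and adjust for your hardware.

```python
# Minimal sketch: load the 4-bit GGUF on CPU with llama-cpp-python.
# Requires: pip install llama-cpp-python huggingface-hub
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="saishshinde15/TethysAI_Vortex_Reasoning_GGUF",
    filename="*Q4_K_M.gguf",  # assumed 4-bit quant filename pattern; adjust to the actual file
    n_ctx=4096,               # context window
    n_threads=8,              # CPU threads; tune for your machine
)
```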

Usage

Use the prompt below for best results:

You are an advanced AI assistant. Provide answers in a clear, step-by-step manner.
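
As a rough usage sketch (assuming llama-cpp-python and that the GGUF files embed a chat template; the quant filename and the user question are placeholders), the prompt above is passed as the system message:

```python
from llama_cpp import Llama

# Same assumed 4-bit quant as in the loading sketch above.
llm = Llama.from_pretrained(
    repo_id="saishshinde15/TethysAI_Vortex_Reasoning_GGUF",
    filename="*Q4_K_M.gguf",
)

# Recommended system prompt from this card, plus a placeholder user question.
response = llm.create_chat_completion(
    messages=[
        {"role": "system",
         "content": "You are an advanced AI assistant. Provide answers in a clear, step-by-step manner."},
        {"role": "user",
         "content": "A train travels 120 km in 1.5 hours. What is its average speed?"},
    ],
    max_tokens=512,
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```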

Model Details

  • Model size: 3.09B parameters
  • Architecture: qwen2
  • Base model: Qwen/Qwen2.5-3B
  • Quantizations: 4-bit and 16-bit GGUF