💨🦅 Vikhr-Qwen-2.5-1.5B-Instruct

RU

Инструктивная модель на основе Qwen-2.5-1.5B-Instruct, обученная на русскоязычном датасете GrandMaster-PRO-MAX. Создана для высокоэффективной обработки текстов на русском и английском языках, обеспечивая точные ответы и быстрое выполнение задач.

EN

An instruction-tuned model based on Qwen-2.5-1.5B-Instruct, trained on the Russian-language dataset GrandMaster-PRO-MAX. Designed for high-efficiency text processing in Russian and English, delivering accurate responses and fast task execution.
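
A minimal, hedged sketch of running the GGUF build of this model locally with llama-cpp-python follows. The quantization filename pattern, context size, and sampling settings are assumptions for illustration, not part of the original card.

```python
# Sketch only: local inference with llama-cpp-python.
# The quantization filename pattern is an assumption; check the repo's file list.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct-GGUF",
    filename="*Q4_K_M.gguf",  # assumed pattern; any GGUF variant from this repo works
    n_ctx=4096,
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful bilingual (Russian/English) assistant."},
        {"role": "user", "content": "Кратко объясни, что такое квантование модели."},
    ],
    max_tokens=256,
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```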

Library: Transformers

Format: GGUF
Model size: 1.54B params
Architecture: qwen2
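
Since the card lists the Transformers library alongside the GGUF format, a hedged sketch of loading a GGUF checkpoint directly through Transformers (via the gguf_file argument, available in recent versions and requiring the gguf package) might look like this; the exact filename below is an assumption.

```python
# Sketch only: load a GGUF file through Transformers.
# Note: the weights are dequantized to full precision on load, so memory use is
# higher than with a native GGUF runtime such as llama.cpp.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct-GGUF"
gguf_file = "Vikhr-Qwen-2.5-1.5B-Instruct-Q8_0.gguf"  # assumed filename; check the repo

tokenizer = AutoTokenizer.from_pretrained(repo_id, gguf_file=gguf_file)
model = AutoModelForCausalLM.from_pretrained(repo_id, gguf_file=gguf_file)
```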

Available quantizations: 1-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
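
To fetch a single quantization rather than the whole repo, a sketch using huggingface_hub is shown below. The filename is hypothetical; lower bit-widths are smaller and faster but lose more quality, with the 4-bit to 8-bit range being the usual trade-off.

```python
# Sketch: download one quantization file from the repo with huggingface_hub.
# The exact filename is an assumption; list the repo files to find the variant you want.
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct-GGUF"

# Inspect which GGUF variants are actually published.
print([f for f in list_repo_files(repo_id) if f.endswith(".gguf")])

# Download one variant (hypothetical filename) for use with any GGUF runtime.
path = hf_hub_download(repo_id=repo_id, filename="Vikhr-Qwen-2.5-1.5B-Instruct-Q4_K_M.gguf")
print(path)
```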


Model tree for Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct-GGUF

Base model: Qwen/Qwen2.5-1.5B → Quantized (7): this model

Dataset used to train Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct-GGUF: GrandMaster-PRO-MAX
