
Mistroll-7B-v2.2-GGUF

Model creator: BarraHome
Original model: Mistroll-7B-v2.2
GGUF quantization: llama.cpp commit 6e472f58e40cd4acf6023e15c75a2700535c5f0b

Description

This model was trained 2x faster with Unsloth and Hugging Face's TRL library.

This experiment tests and refines a specific training and evaluation pipeline. Its primary objective is to identify potential optimizations, with a focus on data engineering, architectural efficiency, and evaluation performance.

The goal is to evaluate the effectiveness of a new training and evaluation pipeline for Large Language Models (LLMs). To that end, we explore adjustments to data preprocessing, model training algorithms, and evaluation metrics to test methods for improvement.
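For illustration only, the sketch below shows what an Unsloth + TRL fine-tuning setup of this kind typically looks like. The base model name, dataset, and hyperparameters are assumptions, not the actual Mistroll-7B-v2.2 training configuration, and the SFTTrainer signature shown matches older TRL releases.

```python
# Minimal sketch of an Unsloth + TRL SFT pipeline (assumed settings, not the
# author's actual training script).
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Load a base model in 4-bit to cut memory use (hypothetical base model name).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="mistralai/Mistral-7B-v0.1",
    max_seq_length=4096,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Hypothetical dataset with a "text" column of pre-formatted prompts.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=4096,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```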

Prompt Template

Following Mistroll's chat template, this model uses the ChatML prompt format.

<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
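As a usage illustration (not part of the original card), llama-cpp-python can apply this ChatML template automatically via its built-in chat format; the GGUF filename below is a placeholder for one of the quantized files in this repository.

```python
# Run one of the quantized GGUF files with llama-cpp-python using ChatML.
from llama_cpp import Llama

llm = Llama(
    model_path="Mistroll-7B-v2.2.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,
    chat_format="chatml",  # matches the prompt template above
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what a GGUF file is."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```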
GGUF

Model size: 7.24B params
Architecture: llama
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 32-bit
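The quantized files can be pulled directly from the Hub, for example with huggingface_hub. The filename below follows the usual llama.cpp naming pattern and is an assumption; check the repository's file list for the exact names.

```python
# Download a quantized GGUF file from this repository (example filename).
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="mgonzs13/Mistroll-7B-v2.2-GGUF",
    filename="Mistroll-7B-v2.2.Q4_K_M.gguf",  # hypothetical filename
)
print(gguf_path)  # local path to the downloaded GGUF file
```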

