Minueza-2-96M-Instruct (Variant 04)

This model is a fine-tuned version of Felladrin/Minueza-2-96M, trained on the English-language totally-not-an-llm/EverythingLM-data-V2-sharegpt dataset.
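To inspect the training data, the dataset can be loaded with the datasets library. A minimal sketch, assuming the default "train" split:

from datasets import load_dataset

# Load the ShareGPT-formatted EverythingLM V2 data; the "train" split name is an assumption.
dataset = load_dataset("totally-not-an-llm/EverythingLM-data-V2-sharegpt", split="train")
print(dataset[0])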

Usage

Install the required packages:

pip install transformers==4.51.1 torch==2.6.0

Then run the example below:

from transformers import pipeline, TextStreamer
import torch

generate_text = pipeline(
    "text-generation",
    model="Felladrin/Minueza-2-96M-Instruct-Variant-04",
    device=torch.device("cuda" if torch.cuda.is_available() else "cpu"),
)

messages = [
    {
        "role": "user",
        "content": "How to become a healthier person?",
    },
]

generate_text(
    generate_text.tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    ),
    streamer=TextStreamer(generate_text.tokenizer, skip_special_tokens=True),
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    top_k=0,
    min_p=0.1,
    repetition_penalty=1.17,
)
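If you prefer the lower-level API over pipeline, the following sketch loads the tokenizer and model directly and reproduces the same sampling settings. It uses apply_chat_template with return_tensors="pt" to build the prompt, as supported in recent Transformers releases.

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "Felladrin/Minueza-2-96M-Instruct-Variant-04"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to(device)

messages = [{"role": "user", "content": "How to become a healthier person?"}]

# Build the prompt with the model's chat template and move it to the target device.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(device)

# Same sampling settings as the pipeline example above.
output_ids = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    top_k=0,
    min_p=0.1,
    repetition_penalty=1.17,
)

# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))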

Training hyperparameters

The following hyperparameters were used during training (a sketch of an equivalent TrainingArguments configuration follows the list):

  • learning_rate: 5.8e-05
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • gradient_accumulation_steps: 64
  • total_train_batch_size: 64
  • optimizer: AdamW (torch implementation) with betas=(0.9, 0.95) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 3
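
For reference, here is a minimal sketch of how these settings could be expressed as Hugging Face TrainingArguments. The output directory is a placeholder; this is not the exact training script used for this model.

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="minueza-2-96m-instruct-variant-04",  # placeholder, not the original path
    learning_rate=5.8e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=64,  # 1 sample per device x 64 steps = total train batch size of 64
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.95,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=3,
)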

Framework versions

  • Transformers 4.51.1
  • PyTorch 2.6.0
  • Datasets 3.4.1
  • Tokenizers 0.21.0

License

This model is licensed under the Apache License 2.0.
