Qwen3 0.6B Base - Ita 🇮🇹

This model is a further-pretrained version of Qwen3-0.6B-Base 🚀, trained on 2 billion Italian tokens. The training data includes educational content 📚 carefully filtered from multilingual pre-training datasets, giving the model a strong understanding of the Italian language and its nuances. It also comes with an extended tokenizer ✏️ optimized for Italian.
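To gauge the effect of the extended tokenizer, you can compare token counts against the upstream tokenizer. A minimal sketch, assuming the upstream model is available on the Hub as Qwen/Qwen3-0.6B-Base and using an arbitrary sample sentence:

from transformers import AutoTokenizer

base_tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B-Base")     # upstream tokenizer
ita_tok = AutoTokenizer.from_pretrained("ReDiX/Qwen-0.6B-Base-ITA")  # extended tokenizer

sentence = "L'intelligenza artificiale sta trasformando il modo in cui lavoriamo."

# fewer tokens per sentence generally indicates better Italian coverage
print("upstream tokens:", len(base_tok.encode(sentence)))
print("extended tokens:", len(ita_tok.encode(sentence)))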

โš ๏ธ Important Note: This is an experimental model. It may generate content that is dangerous or includes personal information. Please use with caution.

Base Model (Not Instruct) 🤖

This is not an instruct model: it does not follow a chat template. Instead, it is designed to be fine-tuned on Italian data for your specific use case 🎯, as sketched below.
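A minimal causal-LM fine-tuning sketch with the Hugging Face Trainer, assuming a plain-text Italian corpus in my_italian_corpus.txt (the file name and hyperparameters are placeholders, not part of this release):

from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

model_name = "ReDiX/Qwen-0.6B-Base-ITA"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# placeholder dataset: replace with your own Italian text corpus
dataset = load_dataset("text", data_files={"train": "my_italian_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="qwen3-ita-finetune",
        per_device_train_batch_size=4,
        num_train_epochs=1,
        bf16=True,
    ),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()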

Evaluation Results 📊

Here's a breakdown of the model's performance on various tasks:

| Task         | Version | Filter | n-shot | Metric     | Value  | Stderr   |
|--------------|---------|--------|--------|------------|--------|----------|
| arc_it       | 2       | none   | 0      | acc ↑      | 0.2566 | ± 0.0128 |
| arc_it       | 2       | none   | 0      | acc_norm ↑ | 0.2840 | ± 0.0132 |
| hellaswag_it | 1       | none   | 0      | acc ↑      | 0.3363 | ± 0.0049 |
| hellaswag_it | 1       | none   | 0      | acc_norm ↑ | 0.3994 | ± 0.0051 |
| m_mmlu_it    | 0       | none   | 5      | acc ↑      | 0.2699 | ± 0.0039 |
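These results appear to come from lm-evaluation-harness; assuming so, something along these lines should reproduce the 0-shot runs (the 5-shot m_mmlu_it needs a separate call with num_fewshot=5; dtype and batch size are illustrative):

import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=ReDiX/Qwen-0.6B-Base-ITA,dtype=bfloat16",
    tasks=["arc_it", "hellaswag_it"],
    num_fewshot=0,
    batch_size=8,
)
print(results["results"])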

How to use this model

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "ReDiX/Qwen-0.6B-Base-ITA"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto"
).eval()

# tokenize an Italian prompt
text = "La principale causa del raffreddore"
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# generate a continuation (greedy decoding by default)
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=128
)
# keep only the newly generated tokens (drop the prompt)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

content = tokenizer.decode(output_ids, skip_special_tokens=True).strip("\n")

print("content:", content)
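Small base models can drift into repetition under greedy decoding; if that happens, standard sampling arguments to model.generate() usually help (the values below are just a reasonable starting point, not tuned for this model):

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=128,
    do_sample=True,          # sample instead of greedy decoding
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.1,
)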