Model Card for SmolLM2-360M-Instruct LoRA (fine-tuned)

This model is a LoRA fine-tuned version of HuggingFaceTB/SmolLM2-360M-Instruct. It was trained on 550 example entries focused on generative language tasks in an instructive style, with an emphasis on character simulation and conversational tasks in Spanish. The resulting model is designed to remain highly efficient on resource-limited devices.
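
With LoRA, the base-model weights stay frozen and only small low-rank adapter matrices are trained. Below is a minimal sketch of how such a fine-tune could be set up with the peft library; the rank, alpha, and target modules are illustrative assumptions, since the actual training configuration is not documented in this card.

import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM2-360M-Instruct")

# Illustrative hyperparameters; the values used for this model are not published.
lora_config = LoraConfig(
    r=16,                                 # assumed adapter rank
    lora_alpha=32,                        # assumed scaling factor
    target_modules=["q_proj", "v_proj"],  # common choice for Llama-style attention blocks
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights remain trainable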

Model Details

Model Description

  • Developed by: ElMagoRubio
  • Model type: Causal Language Model
  • Language(s): Spanish (primary)
  • License: Apache 2.0, inherited from the base model HuggingFaceTB/SmolLM2-360M-Instruct
  • Finetuned from model: HuggingFaceTB/SmolLM2-360M-Instruct

Model Sources

  • Repository: ElMagoRubio/SmolLM2-360M-Instruct-lora

Uses

Direct Use

Intended for text-generation tasks in Spanish, especially in environments that require lightweight, efficient models. This model is currently under training. A reduced-precision loading sketch for constrained devices follows.
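
Because the target environments are resource-limited, the checkpoint (stored in F32) can be loaded in reduced precision. This is a hedged sketch: bfloat16 support depends on the hardware, and device_map="auto" requires the accelerate package.

import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "ElMagoRubio/SmolLM2-360M-Instruct-lora",
    torch_dtype=torch.bfloat16,  # roughly halves memory versus the F32 checkpoint
    device_map="auto",           # place weights automatically (requires accelerate)
)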

Downstream Use

This model is integrated into the interactive role-playing game Words & Swords, which is still in development. The game is part of a Final Degree Project (TFG) at the Universidad de Granada (UGR).

Out-of-Scope Use

Not designed for complex multilingual tasks, numerical data processing, or deep logical reasoning.

Bias, Risks, and Limitations

The model may reflect biases present in the training data. It should not be used in critical contexts without human review.

Recommendations

Evaluate and audit sensitive outputs. Do not use in medical, legal, or financial contexts without specialized validation.

How to Get Started with the Model

# Loading the LoRA adapter repository directly through transformers
# requires the peft package to be installed (pip install peft).
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "ElMagoRubio/SmolLM2-360M-Instruct-lora"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
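
Once loaded, a chat-style generation call might look like the following; the prompt and sampling settings are illustrative.

# Continues from the snippet above.
messages = [{"role": "user", "content": "Describe un bosque encantado."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))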

Technical Specifications

  • Model size: 362M params
  • Tensor type: F32
  • Format: Safetensors (LoRA adapter for HuggingFaceTB/SmolLM2-360M-Instruct)