Qwen2.5-1.5B-Sign

Introduction

Qwen2.5-Sign is a text-to-Chinese-sign-language model based on Qwen2.5.

Finetune Details

Parameter                    Value
learning_rate                5e-05
train_batch_size             4
eval_batch_size              4
gradient_accumulation_steps  8
total_train_batch_size       32
lr_scheduler_type            cosine
lr_scheduler_warmup_steps    100
num_epochs                   4
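As a minimal sketch, the hyperparameters above can be expressed as a Hugging Face TrainingArguments-style configuration dict (the key names follow the table; the actual training script is not part of this repository, so this is illustrative only). Note that total_train_batch_size is derived, not set directly: it is the per-device batch size times the gradient accumulation steps.

```python
# Hypothetical fine-tuning configuration mirroring the table above
# (keys follow Hugging Face TrainingArguments naming conventions).
training_config = {
    "learning_rate": 5e-05,
    "per_device_train_batch_size": 4,
    "per_device_eval_batch_size": 4,
    "gradient_accumulation_steps": 8,
    "lr_scheduler_type": "cosine",
    "warmup_steps": 100,
    "num_train_epochs": 4,
}

# Effective (total) train batch size, assuming a single device:
# 4 samples/step * 8 accumulation steps = 32
total_train_batch_size = (
    training_config["per_device_train_batch_size"]
    * training_config["gradient_accumulation_steps"]
)
print(total_train_batch_size)  # 32
```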

Quickstart

from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "thundax/Qwen2.5-1.5B-Sign",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("thundax/Qwen2.5-1.5B-Sign")

text = "站一个制高点看上海,上海的弄堂是壮观的景象。它是这城市背景一样的东西。"
input_text = f'Translate sentence into labels\n{text}\n'
model_inputs = tokenizer([input_text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)

Citation

If you find our work helpful, feel free to cite it.

@software{qwen2-sign,
  author = {thundax},
  title = {qwen2-sign: A Tool for Text to Sign},
  year = {2025},
  url = {https://github.com/thundax-lyp},
}