Qwen2.5 7B for Gujarati: Continual pre-training only

This model is built on top of Qwen2.5 7B and adapted for Gujarati via continual pre-training on 500M target-language tokens sampled from MADLAD-400.

Model Details

  • Vocabulary: This model adds no target-language vocabulary; it retains the original Qwen2.5 7B tokenizer (a quick check is sketched after this list).
  • Training: This model was continually pre-trained on 500M target-language tokens sampled from MADLAD-400.
  • Size: 7.62B parameters, stored as F32 safetensors.
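
Because no vocabulary was added, the adapted model's tokenizer should match the base Qwen2.5 7B tokenizer. A minimal sketch of that check (the equality of vocabulary sizes is an assumption based on the statement above, not a result reported on this card):

from transformers import AutoTokenizer

# Load the base tokenizer and the adapted model's tokenizer
base_tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B")
adapted_tok = AutoTokenizer.from_pretrained(
    "atsuki-yamaguchi/Qwen2.5-7B-gu-lapt-madlad"
)

# No vocabulary expansion: both should report the same size
print(len(base_tok), len(adapted_tok))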

Model Description

  • Language: Gujarati
  • License: Apache 2.0
  • Fine-tuned from model: Qwen/Qwen2.5-7B

How to Get Started with the Model

Use the code below to get started with the model.

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the continually pre-trained model and its (unchanged) tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "atsuki-yamaguchi/Qwen2.5-7B-gu-lapt-madlad",
    torch_dtype="auto",  # load weights in their stored precision
)
tokenizer = AutoTokenizer.from_pretrained(
    "atsuki-yamaguchi/Qwen2.5-7B-gu-lapt-madlad"
)
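
A minimal generation sketch follows; the Gujarati prompt and the decoding parameters are illustrative, not taken from this card:

# Generate a Gujarati continuation (prompt and sampling settings are illustrative)
prompt = "ગુજરાત એ ભારતનું એક રાજ્ય છે."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))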

Citation

@misc{yamaguchi2024vocabularyexpansionchatmodels,
      title={{ElChat}: Adapting Chat Language Models Using Only Target Unlabeled Language Data}, 
      author={Atsuki Yamaguchi and Terufumi Morishita and Aline Villavicencio and Nikolaos Aletras},
      year={2024},
      eprint={2412.11704},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2412.11704},
}