---
license: apache-2.0
datasets:
  - allenai/MADLAD-400
language:
  - ta
base_model:
  - Qwen/Qwen2.5-7B-Instruct
library_name: transformers
---

# Qwen2.5 7B Instruct for Tamil: Continual pre-training only

This model is built on top of Qwen2.5 7B Instruct and adapted for Tamil by continual pre-training on 500M target-language tokens sampled from MADLAD-400.

## Model Details

- **Vocabulary**: This model has no additional target vocabulary; it retains the original vocabulary of Qwen2.5 7B Instruct.
- **Training**: This model was continually pre-trained on 500M target-language tokens sampled from MADLAD-400 (see the sketch after this list).
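
To make the 500M-token budget concrete, here is a minimal, hypothetical sketch of how such a sample could be drawn, not the actual training pipeline. The `languages=` loading argument and the `"clean"` split name follow the MADLAD-400 dataset card and are assumptions; check the card for the exact interface in your `datasets` version.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Count tokens with the base model's tokenizer, since the vocabulary is unchanged.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")

# Assumption: MADLAD-400 exposes per-language configs via `languages=` and a
# "clean" split; `trust_remote_code=True` may be required for script datasets.
stream = load_dataset(
    "allenai/MADLAD-400",
    languages=["ta"],
    split="clean",
    streaming=True,
    trust_remote_code=True,
)

budget = 500_000_000  # 500M target-language tokens
collected, corpus = 0, []
for example in stream:
    ids = tokenizer(example["text"], add_special_tokens=False)["input_ids"]
    collected += len(ids)
    corpus.append(example["text"])
    if collected >= budget:
        break
```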

### Model Description

- **Language:** Tamil
- **License:** Apache 2.0
- **Fine-tuned from model:** Qwen/Qwen2.5-7B-Instruct

### Model Sources

- **Paper:** https://arxiv.org/abs/2412.11704

## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# The adapted weights live in the fine-tuned repository.
model = AutoModelForCausalLM.from_pretrained(
    "atsuki-yamaguchi/Qwen2.5-7B-Instruct-ta-lapt-madlad"
)
# The vocabulary is unchanged, so the original Qwen2.5 tokenizer is used.
tokenizer = AutoTokenizer.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct"
)
```
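
As a quick check, a minimal generation sketch follows. The chat-template call is the standard `transformers` API for Qwen2.5 Instruct models; the Tamil prompt is purely illustrative.

```python
# Illustrative prompt: "Hello! Who are you?" in Tamil.
messages = [{"role": "user", "content": "வணக்கம்! நீங்கள் யார்?"}]

# Build the chat-formatted input and generate a reply.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```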

## Citation

```bibtex
@misc{yamaguchi2024vocabularyexpansionchatmodels,
      title={{ElChat}: Adapting Chat Language Models Using Only Target Unlabeled Language Data},
      author={Atsuki Yamaguchi and Terufumi Morishita and Aline Villavicencio and Nikolaos Aletras},
      year={2024},
      eprint={2412.11704},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2412.11704},
}
```