---
base_model: >-
  RefalMachine/llama3_extended_darulm_20_05_24_part1-2_64000_bpe_full_lr2e4_bs256
datasets:
  - IlyaGusev/saiga_scored
language:
  - ru
  - en
---

## Model description

LoRA-tuned version of ruadapt llama 3 8B with an extended tokenizer, obtained via the LEP (Learned Embedding Propagation; paper forthcoming) procedure and fine-tuned on the saiga_scored d7 dataset.

Thanks to the extended tokenizer, the model processes Russian text more efficiently (fewer tokens per text, and thus faster and cheaper inference).
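A minimal usage sketch with the Hugging Face `transformers` library. The repository id below is the base model from the card metadata and serves only as an illustration; substitute the id of this checkpoint when loading.

```python
# Hedged sketch: loading the model and tokenizer with transformers.
# REPO_ID is illustrative (taken from the base_model field above);
# replace it with the actual checkpoint id you intend to use.
REPO_ID = "RefalMachine/llama3_extended_darulm_20_05_24_part1-2_64000_bpe_full_lr2e4_bs256"


def load(repo_id: str = REPO_ID):
    """Return (tokenizer, model) for the given repository id."""
    # Imported inside the function so the sketch can be read/checked
    # without transformers installed; downloading weights happens here.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
    return tokenizer, model
```

Generation then follows the usual `transformers` pattern: encode a prompt with the tokenizer, call `model.generate(...)`, and decode the result.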

## How to cite

Tikhomirov M., Chernyshev D. Facilitating large language model Russian adaptation with Learned Embedding Propagation // 2024 (forthcoming).

Tikhomirov M., Chernyshev D. Impact of Tokenization on LLaMa Russian Adaptation // 2023 Ivannikov Ispras Open Conference (ISPRAS). – IEEE, 2023. – pp. 163-168.