Fietje banner

Fietje 2 Chat

An open and efficient LLM for Dutch

πŸ‘±β€β™€οΈ Base version - πŸ€– Instruct version - πŸ’¬ Chat version (this one) - πŸš€ GGUF of Chat

Chat with Fietje here!

This is the chat version of Fietje, a DPO-tuned (aligned) continuation of the instruct version. Fietje is an adapted version of microsoft/phi-2, tailored to Dutch text generation by training on 28B tokens. With 2.7 billion parameters it is small and efficient, yet it performs almost on par with more powerful Dutch LLMs of twice its size, such as GEITje 7B Ultra.

A thorough description of the creation and evaluation of Fietje, as well as usage examples, is available in this GitHub repository.
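As a quick, hedged illustration (the repository linked above contains the canonical examples), the model can be loaded like any Hugging Face chat model. The prompt and sampling settings below are arbitrary choices, and the tokenizer is assumed to ship a chat template, as is typical for chat models.

```python
# Minimal inference sketch; prompt and sampling settings are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BramVanroy/fietje-2-chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the weights are published in bfloat16
    device_map="auto",
)

messages = [{"role": "user", "content": "Wat is de hoofdstad van Nederland?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```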

Citation

If you use Fietje or the CulturaX + Wikipedia filtered subset in your work, please cite the following paper:

@misc{vanroy2024fietjeopenefficientllm,
      title={Fietje: An open, efficient LLM for Dutch}, 
      author={Bram Vanroy},
      year={2024},
      eprint={2412.15450},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2412.15450}, 
}

Intended uses & limitations

The same limitations as phi-2, and LLMs in general, apply here. LLMs hallucinate, make mistakes, and should not be trusted. Use at your own risk!

Training and evaluation data

Fietje 2 Chat was finetuned from the instruct model on the following datasets. The number of training samples per dataset is given in brackets, totalling 18,653 samples.

A range of learning rates, beta values, and batch sizes was investigated in search of a converging combination. You can find them all in the W&B runs.

Training procedure

I am thankful to the Flemish Supercomputer Center (VSC) for providing the computational power to accomplish this project. Including time spent waiting in the job queue, training a single run took around nine hours on one A100 80GB.

Training was done with the wonderful alignment-handbook, using DeepSpeed as a back-end. The exact training recipes and SLURM script are given in the GitHub repository.

Training hyperparameters

The following hyperparameters were used during training:

  • beta: 0.2
  • learning_rate: 2e-06
  • train_batch_size: 8
  • eval_batch_size: 4
  • seed: 42
  • distributed_type: multi-GPU
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-07
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 1.0
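
For orientation only, here is a minimal sketch of how these hyperparameters map onto a DPO trainer. The actual run used the alignment-handbook recipes mentioned above; this version uses TRL's `DPOConfig`/`DPOTrainer` instead, and the preference dataset name and output directory are placeholders.

```python
# Hedged sketch, not the actual training script: the real run used the
# alignment-handbook with DeepSpeed. Dataset name and output path are placeholders.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "BramVanroy/fietje-2-instruct"  # DPO continues from the instruct model
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Preference data with "prompt", "chosen" and "rejected" columns (placeholder name).
train_dataset = load_dataset("your-org/your-dutch-preference-data", split="train")

args = DPOConfig(
    output_dir="fietje-2-chat-dpo",
    beta=0.2,
    learning_rate=2e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-7,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1.0,
    bf16=True,
    seed=42,
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # use `tokenizer=` on older TRL versions
)
trainer.train()
```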

Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.2515        | 1.0   | 1166 | 0.2842          | -1.1549        | -3.6363          | 0.8867             | 2.4815          | -657.6813      | -451.3364    | -1.2868         | -1.3528       |
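
For context, these columns follow the standard DPO bookkeeping (general definitions, not specific to this run). The implicit reward of a completion $y$ for a prompt $x$ is

$$
r_\theta(x, y) = \beta \left( \log \pi_\theta(y \mid x) - \log \pi_{\text{ref}}(y \mid x) \right),
$$

so Rewards/chosen and Rewards/rejected are these rewards averaged over the evaluation pairs, Rewards/margins is their difference, Rewards/accuracies is the fraction of pairs in which the chosen reward exceeds the rejected one, and the loss being minimised is

$$
\mathcal{L}_{\text{DPO}} = -\log \sigma\big(r_\theta(x, y_{\text{chosen}}) - r_\theta(x, y_{\text{rejected}})\big).
$$

Logps/* and Logits/* are, roughly, the policy model's log-probabilities and mean logits on the respective completions.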

Framework versions

  • Transformers 4.39.1
  • Pytorch 2.1.2+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.2