
GEITje-7B-chat-v2

πŸ€–οΈ Try the chat model in πŸ€— Hugging Face Spaces!

GEITje-7B

GEITje is a large open Dutch language model with 7 billion parameters, based on Mistral 7B. It has been further trained on 10 billion tokens of Dutch text. This has improved its Dutch language skills and increased its knowledge of Dutch topics.

Model description

Mistral – Base Model

GEITje is based on Mistral 7B. It's a large open language model with 7 billion parameters, trained by Mistral AI. According to Mistral AI, the 7B model performs better than Llama 2 13B on all (English-language) benchmarks they tested it on. Mistral 7B has been released under the Apache 2.0 open source license.

GEITje – Trained Further on Dutch Texts

GEITje was created by further training Mistral 7B on no less than 10 billion tokens of Dutch text from the Dutch Gigacorpus and the MADLAD-400 web crawling corpus. It is a full-parameter finetune, performed on all parameters; it is not a PEFT or LoRA finetune. Like Mistral, GEITje has a context length of 8,192 tokens.
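As a minimal sketch of how the base model can be used for plain text generation with 🤗 Transformers (the repo id `Rijgersberg/GEITje-7B`, the bf16 loading and the generation settings are assumptions, not prescribed by this card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Rijgersberg/GEITje-7B"  # assumed repo id of the base model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: load in bf16 to fit on a single GPU
    device_map="auto",           # requires the accelerate package
)

prompt = "Het hoogste punt van Nederland is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```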

GEITje-chat – Finetuned for Dialogues

As a demonstration of GEITje's capabilities for chat applications, two initial chat variants of GEITje have also been finetuned: GEITje-chat and GEITje-chat-v2. They can follow instructions, answer questions, and hold dialogues on a variety of topics.
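A minimal sketch of holding a dialogue with GEITje-7B-chat-v2 via the tokenizer's chat template, assuming the repository ships one; the example conversation and generation settings are illustrative only:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Rijgersberg/GEITje-7B-chat-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 inference
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Wat zijn de drie grootste steden van Nederland?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
# Print only the newly generated answer, not the prompt
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```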

More info

Read more about GEITje-chat in the 📄 README on GitHub.

Checkpoints

An intermediate checkpoint is available in the checkpoints branch.
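To inspect the intermediate checkpoint, the weights can presumably be loaded by pointing `revision` at that branch; a minimal sketch, assuming the branch is literally named `checkpoints`:

```python
from transformers import AutoModelForCausalLM

# Load weights from the "checkpoints" branch instead of "main"
# (branch name taken from the note above; individual commits are not listed here).
model = AutoModelForCausalLM.from_pretrained(
    "Rijgersberg/GEITje-7B-chat-v2",
    revision="checkpoints",
)
```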

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the sketch after the list for how they might map to Hugging Face `TrainingArguments`):

  • learning_rate: 5e-06
  • train_batch_size: 2
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 1
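The card does not include the training script itself; as a rough, hedged illustration, the hyperparameters listed above would correspond approximately to the following `TrainingArguments`. Everything beyond the listed values (output directory, bf16, single-device assumption) is an assumption:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="geitje-7b-chat-v2",   # assumed, not from the card
    learning_rate=5e-6,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,    # 2 x 8 = total train batch size of 16 on one device
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    bf16=True,                        # assumption: bf16 mixed-precision training
)
```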

Training results

| Training Loss | Epoch | Step  | Validation Loss |
|--------------:|------:|------:|----------------:|
| 0.7832        | 0.05  | 609   | 0.8844          |
| 0.6904        | 0.1   | 1218  | 0.8698          |
| 0.8195        | 0.15  | 1827  | 0.8583          |
| 0.7463        | 0.2   | 2436  | 0.8475          |
| 0.6739        | 0.25  | 3045  | 0.8395          |
| 0.7604        | 0.3   | 3654  | 0.8332          |
| 0.8024        | 0.35  | 4263  | 0.8261          |
| 0.6881        | 0.4   | 4872  | 0.8203          |
| 0.6466        | 0.45  | 5481  | 0.8167          |
| 0.7042        | 0.5   | 6090  | 0.8121          |
| 0.702         | 0.55  | 6699  | 0.8081          |
| 0.7255        | 0.6   | 7308  | 0.8054          |
| 0.7558        | 0.65  | 7917  | 0.8036          |
| 0.7587        | 0.7   | 8526  | 0.8022          |
| 0.9217        | 0.75  | 9135  | 0.8016          |
| 0.6938        | 0.8   | 9744  | 0.8011          |
| 0.6962        | 0.85  | 10353 | 0.8011          |
| 0.664         | 0.9   | 10962 | 0.8011          |
| 0.6544        | 0.95  | 11571 | 0.8011          |
| 0.6782        | 1.0   | 12180 | 0.8011          |

Framework versions

  • Transformers 4.36.0.dev0
  • Pytorch 2.1.1+cu121
  • Datasets 2.15.0
  • Tokenizers 0.15.0