GPT-2 GammaCorpus v2 100k

This is a GPT-2 language model fine-tuned on the GammaCorpus-v2-100k dataset, which consists of 100,000 structured user-assistant conversational pairs. The model was initialised from the pretrained gpt2 weights and trained for 2 epochs with a maximum sequence length of 256, a batch size of 2 (with gradient accumulation), and a learning rate of 5e-5. The tokenizer is the original GPT-2 tokenizer, with the EOS token reused as the pad token. The training objective was causal language modeling.
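
Below is a minimal loading-and-generation sketch using the Hugging Face transformers library. The chat-style prompt format ("User: ... / Assistant:") is an assumption, since the exact conversation template used during fine-tuning is not documented here.

```python
# Sketch: load the fine-tuned model and generate a reply.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model_id = "rubenroy/GPT2-GCv2-100k"
tokenizer = GPT2Tokenizer.from_pretrained(model_id)
model = GPT2LMHeadModel.from_pretrained(model_id)

# Assumed prompt format; adjust to match the actual training template.
prompt = "User: What is the capital of France?\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    top_p=0.95,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,  # EOS doubles as the pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```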

Link to training dataset: https://huggingface.co/datasets/rubenroy/GammaCorpus-v2-100k
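
For reference, a hedged sketch of the described training setup (causal LM objective, 2 epochs, max length 256, batch size 2 with gradient accumulation, learning rate 5e-5) is shown below. The dataset column name, conversation structure, and gradient accumulation factor are assumptions, not documented values.

```python
# Sketch of the fine-tuning recipe described above, using the Trainer API.
from datasets import load_dataset
from transformers import (
    DataCollatorForLanguageModeling,
    GPT2LMHeadModel,
    GPT2Tokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # reuse EOS as the pad token
model = GPT2LMHeadModel.from_pretrained("gpt2")

dataset = load_dataset("rubenroy/GammaCorpus-v2-100k", split="train")

def tokenize(batch):
    # The "conversation" column and its role/content layout are assumptions;
    # adapt this to the dataset's actual schema.
    texts = [
        "\n".join(f"{turn['role']}: {turn['content']}" for turn in conv)
        for conv in batch["conversation"]
    ]
    return tokenizer(texts, truncation=True, max_length=256)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="gpt2-gcv2-100k",
    num_train_epochs=2,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,  # accumulation factor is an assumption
    learning_rate=5e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```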
