Model Details

Model Description

This model is a fine-tuned version of "Bertug1911/BrtGPT-124m", tuned for higher-quality text generation. QA fine-tuning is coming soon!

  • Developed by: Bertug Gunel
  • Funded by [optional]: Nobody
  • Shared by [optional]: Nobody
  • Model type: "gpt2"
  • Language(s) (NLP): English (en)
  • License: "Creative Commons Attribution Non Commercial 4.0 International"
  • Finetuned from model [optional]: Bertug1911/BrtGPT-124m-Base

Model Sources [optional]

  • Repository: NOT AVAILABLE
  • Paper [optional]: NOT AVAILABLE
  • Demo [optional]: The model itself serves as the demo!

Uses

Direct Use

Direct use: a "Hugging Face Space" demo link is coming soon!
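Until the Space is available, a minimal sketch of loading the model with the `transformers` library (the Hub id and generation parameters below are assumptions based on this card, not an official usage example):

```python
MODEL_ID = "Bertug1911/BrtGPT-124m-FineTuned"  # assumed Hub id for this card


def generate(prompt: str, max_new_tokens: int = 50) -> str:
    """Download the fine-tuned model from the Hub and continue `prompt`."""
    # Imported inside the function so the sketch reads without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)


# Usage (requires network access to the Hugging Face Hub):
# print(generate("Once upon a time"))
```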

Out-of-Scope Use

Do not use this model for math or code tasks. (You can verify this yourself with benchmarks!)

Bias, Risks, and Limitations

The model can generate politically charged text. Use at your own risk.

Training Details

Training Data

The model was trained for 10+ epochs; final training loss was between 0.3 and 0.4.

Training Procedure

The model was fine-tuned in FP16 (mixed precision).
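A hedged sketch of how such an FP16 run might be configured; the epoch count comes from this card, while the batch size and learning rate are placeholders, not the author's actual settings:

```python
def fp16_training_config(epochs: int = 10) -> dict:
    """Keyword arguments one might pass to transformers.TrainingArguments
    to reproduce a mixed-precision fine-tuning run like the one described."""
    return {
        "num_train_epochs": epochs,          # the card reports 10+ epochs
        "fp16": True,                        # FP16 mixed precision, per the card
        "per_device_train_batch_size": 8,    # placeholder value
        "learning_rate": 5e-5,               # placeholder value
    }


# Usage with the Trainer API:
# args = TrainingArguments(output_dir="brtgpt-finetune", **fp16_training_config())
```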

Model Card Contact

For contact: "[email protected]" or "[email protected]"

Safetensors

  • Model size: 87M params
  • Tensor type: F32
