ISHA Call Center QA Model

This model was trained from StableBeluga-13B, Stability AI's fine-tuned version of Llama 2 13B (its sibling StableBeluga2 topped the LLM leaderboard at the time of training).

Dataset Used : https://huggingface.co/datasets/nateshmbhat/isha-qa-text

Training parameters used:

  • Base model : stabilityai/StableBeluga-13B
  • Quantization used : 4-bit
  • Learning rate : 2e-4
  • Batch Size : 2
  • Epochs : 3
  • Trainer : sft
  • Max token length : 2048 (capable of higher token length)

```
!autotrain llm --train --project_name project-isha-qa --model stabilityai/StableBeluga-13B --data_path nateshmbhat/isha-qa-text --use_peft --use_int4 --learning_rate 2e-4 --train_batch_size 2 --num_train_epochs 3 --trainer sft --model_max_length 2048 --push_to_hub --repo_id nateshmbhat/model-isha-qa
```
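Since the command above trains with `--use_peft` and `--use_int4`, the pushed repo contains a PEFT adapter to be loaded on top of the 4-bit base model. A minimal inference sketch, assuming `transformers`, `peft`, and `bitsandbytes` are installed; the instruction-style prompt format is an assumption, since the card does not specify the one used by the dataset:

```python
BASE_MODEL = "stabilityai/StableBeluga-13B"
ADAPTER_ID = "nateshmbhat/model-isha-qa"


def build_prompt(question: str) -> str:
    # Assumed prompt shape -- match this to however nateshmbhat/isha-qa-text
    # actually formats its question/answer pairs before relying on it.
    return f"### Question:\n{question}\n\n### Answer:\n"


def generate_answer(question: str, max_new_tokens: int = 256) -> str:
    # Imports kept local so the prompt helper above can be used without the
    # heavyweight model dependencies installed.
    import torch
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
    base = AutoModelForCausalLM.from_pretrained(
        BASE_MODEL,
        load_in_4bit=True,  # matches the --use_int4 flag used during training
        device_map="auto",
    )
    # Apply the fine-tuned adapter weights on top of the frozen base model.
    model = PeftModel.from_pretrained(base, ADAPTER_ID)

    inputs = tokenizer(build_prompt(question), return_tensors="pt").to(model.device)
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

Running `generate_answer` requires a GPU with enough memory for the 4-bit 13B weights; the model was trained with a 2048-token context, so keep prompts within that length.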
