
πŸ’Ό TallyPrimeAssistant β€” Distilled GPT-2 Model

This is a distilled GPT-2-based conversational model fine-tuned on FAQs and navigation instructions from TallyPrime, a leading business accounting software widely used in India. The model is designed to give users quick, accurate answers about TallyPrime features such as GST, e-invoicing, and payroll.

🧠 Model Summary

  • Teacher Model: gpt2-large
  • Student Model: distilgpt2
  • Distillation Method: Knowledge distillation using Hugging Face Transformers and a custom training pipeline (a minimal sketch of the objective follows this list)
  • Training Dataset: Internal dataset of Q&A pairs and system navigation steps from TallyPrime documentation and usage
  • Format: safetensors (secure and fast)
  • Model Size: 81.9M parameters (F32)
  • Tokenizer: Byte-Pair Encoding (BPE), same as GPT-2
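
The distillation setup pairs the usual next-token cross-entropy with a soft-target loss that pushes the student toward the teacher's temperature-scaled token distributions. The training pipeline itself is internal, so the sketch below is only an illustrative single step: the temperature T, mixing weight alpha, and one-example batch are assumptions, not the actual training configuration.

import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
teacher = AutoModelForCausalLM.from_pretrained("gpt2-large").eval()  # teacher (large download)
student = AutoModelForCausalLM.from_pretrained("distilgpt2")

T, alpha = 2.0, 0.5  # assumed temperature and loss-mixing weight

batch = tokenizer("How to enable GST in Tally Prime?", return_tensors="pt")

with torch.no_grad():
    teacher_logits = teacher(**batch).logits  # teacher supplies soft targets only

out = student(**batch, labels=batch["input_ids"])  # hard-label cross-entropy in out.loss

# KL divergence between temperature-scaled teacher and student distributions
kd_loss = F.kl_div(
    F.log_softmax(out.logits / T, dim=-1),
    F.softmax(teacher_logits / T, dim=-1),
    reduction="batchmean",
) * (T * T)

loss = alpha * kd_loss + (1 - alpha) * out.loss  # combined distillation objective
loss.backward()  # one illustrative step; optimizer and data loop omitted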

πŸš€ Example Usage

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned model and its GPT-2 BPE tokenizer
tokenizer = AutoTokenizer.from_pretrained("Jayanthram/TallyPrimeAssistant")
model = AutoModelForCausalLM.from_pretrained("Jayanthram/TallyPrimeAssistant")

prompt = "How to enable GST in Tally Prime?"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate up to 60 new tokens and decode the answer
output = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(output[0], skip_special_tokens=True))
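
With greedy decoding (the default above), small GPT-2 models often produce repetitive answers. Sampling usually gives more natural output; the settings below are illustrative starting points, not tuned values for this model.

output = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=True,                        # sample instead of greedy decoding
    temperature=0.7,                       # assumed value; tune as needed
    top_p=0.9,                             # assumed nucleus-sampling cutoff
    pad_token_id=tokenizer.eos_token_id,   # GPT-2 defines no pad token
)
print(tokenizer.decode(output[0], skip_special_tokens=True))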
