---
library_name: transformers
tags:
- text-classification
- customer-support
---
# Model Card for Ticket Classifier
A fine-tuned DistilBERT model that automatically classifies customer support tickets into four categories: Billing Question, Feature Request, General Inquiry, and Technical Issue.
## Model Details
### Model Description
This model is a fine-tuned version of `distilbert-base-uncased` that has been trained to classify customer support tickets into predefined categories. It can help support teams automatically route tickets to the appropriate department.
- **Developed by:** [Your Name/Organization]
- **Model type:** Text Classification (DistilBERT)
- **Language(s):** English
- **License:** [Your License]
- **Finetuned from model:** `distilbert-base-uncased`
## Uses
### Direct Use
This model can be used directly to classify incoming customer support tickets. It takes a text description of the customer's issue and assigns it one of four categories:
- Billing Question (0)
- Feature Request (1)
- General Inquiry (2)
- Technical Issue (3)
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
# Define class mapping
id_to_label = {0: 'Billing Question', 1: 'Feature Request', 2: 'General Inquiry', 3: 'Technical Issue'}
# Load model and tokenizer
MODEL_PATH = 'Dragneel/Ticket-classification-model'
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_PATH)
# Prepare input
text = "I was charged twice for my subscription this month"
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=128)
# Run inference
with torch.no_grad():
    outputs = model(**inputs)
prediction = torch.argmax(outputs.logits, dim=1).item()
print(f"Predicted class: {id_to_label[prediction]}")
```
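If you also want a confidence score for routing decisions (e.g. sending low-confidence tickets to a human), apply a softmax to the logits. A minimal sketch, using an illustrative logits tensor in place of the model output above:

```python
import torch

id_to_label = {0: 'Billing Question', 1: 'Feature Request', 2: 'General Inquiry', 3: 'Technical Issue'}

# Illustrative logits standing in for outputs.logits from the model
logits = torch.tensor([[0.2, -1.1, 0.4, 2.9]])

probs = torch.softmax(logits, dim=1)            # normalize logits to probabilities
prediction = torch.argmax(probs, dim=1).item()  # index of the highest-probability class
confidence = probs[0, prediction].item()        # probability assigned to that class

print(f"Predicted class: {id_to_label[prediction]} (confidence: {confidence:.1%})")
```

The same two lines work on `outputs.logits` from the snippet above; a common pattern is to auto-route only when `confidence` exceeds a chosen threshold.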