PhoBERT-Spam-MultiClass

Fine-tuned from vinai/phobert-base on the ViSpamReviews dataset for multi-class spam review classification.

  • Task: 4-way classification

  • Dataset: ViSpamReviews

  • Hyperparameters

    • Batch size: 32
    • LR: 3e-5
    • Epochs: 100
    • Max seq len: 256

Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("visolex/phobert-spam-classification")
model = AutoModelForSequenceClassification.from_pretrained("visolex/phobert-spam-classification")

# "Only brand PR, not a genuine review."
text = "Chỉ PR thương hiệu chứ không review thật."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=256)
pred = model(**inputs).logits.argmax(dim=-1).item()

label_map = {0: "NO-SPAM", 1: "SPAM-1", 2: "SPAM-2", 3: "SPAM-3"}
print(label_map[pred])
```
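
Beyond the argmax label, a softmax over the logits gives a per-class confidence score. The sketch below uses hypothetical logits standing in for `model(**inputs).logits` from the snippet above (shape `[1, 4]`), so it runs without downloading the model.

```python
import torch

# Hypothetical logits for one input, standing in for model(**inputs).logits.
logits = torch.tensor([[2.0, 0.5, -1.0, 0.1]])

# Softmax turns raw logits into probabilities that sum to 1.
probs = torch.softmax(logits, dim=-1).squeeze(0)

label_map = {0: "NO-SPAM", 1: "SPAM-1", 2: "SPAM-2", 3: "SPAM-3"}
pred = int(probs.argmax())
print(label_map[pred], round(probs[pred].item(), 3))
```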
Model size: 135M params · Tensor type: F32 · Safetensors