XLM-RoBERTa for multilingual spam detection

I trained this model to detect spam in German, because there is no labeled German spam email dataset and I could not find an existing multilingual model fine-tuned on the Enron spam dataset.

Intended use

Identifying spam mail in any language supported by XLM-RoBERTa. Note that the model has not been thoroughly tested on this intended use; it has only been validated on the Enron spam dataset. A usage sketch follows below.
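
A minimal usage sketch with the Hugging Face transformers text-classification pipeline. The label names returned (e.g. "spam"/"ham" vs. LABEL_0/LABEL_1) depend on the fine-tuning configuration and are an assumption here; check the model config to confirm.

```python
from transformers import pipeline

# Load the fine-tuned spam classifier from the Hub.
classifier = pipeline(
    "text-classification",
    model="kauffinger/xlm-roberta-base-finetuned-enron",
)

mails = [
    "Congratulations! You have won a free prize, click here to claim it.",
    "Hallo Team, anbei das Protokoll vom heutigen Meeting.",  # German example
]

# Labels may be returned as LABEL_0 / LABEL_1 rather than "ham" / "spam".
for mail, prediction in zip(mails, classifier(mails)):
    print(f"{prediction['label']} ({prediction['score']:.3f}): {mail[:60]}")
```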

Evaluation

Evaluation on the test set of the Enron spam dataset:

  • loss: 0.0315
  • accuracy: 0.996
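
A sketch of how such an evaluation could be reproduced. The dataset ID (SetFit/enron_spam), its column names, and the mapping of LABEL_1 to spam are assumptions, since the card does not specify them.

```python
from datasets import load_dataset
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="kauffinger/xlm-roberta-base-finetuned-enron",
    truncation=True,
    max_length=512,
)

# Assumed dataset ID; the card only says "Enron spam dataset".
test = load_dataset("SetFit/enron_spam", split="test")

correct = 0
for example, prediction in zip(test, classifier(test["text"], batch_size=32)):
    # Assumes LABEL_1 (or "spam") corresponds to spam; verify against the model config.
    predicted_spam = prediction["label"] in ("spam", "LABEL_1")
    correct += int(predicted_spam == bool(example["label"]))

print(f"accuracy = {correct / len(test):.3f}")
```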
Model size: 278M parameters (Safetensors; tensor types I64, F32)

Model: kauffinger/xlm-roberta-base-finetuned-enron, trained on the Enron spam dataset.