---
tags:
- text-classification
library_name: transformers
datasets:
- I77/valid_invalid_questions
language:
- ru
metrics:
- accuracy
- f1
base_model:
- DeepPavlov/rubert-base-cased
---
# RuBERT Model for Logical Question Classification
This model classifies Russian questions as logically correct or incorrect:

- `1`: logically correct
- `0`: logically incorrect
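
Besides the pipeline example below, the model can be loaded directly with `AutoTokenizer`/`AutoModelForSequenceClassification`. The following is a minimal sketch; it assumes the checkpoint id `I77/question_classifier` from the inference example and that the class indices follow the 0/1 mapping above.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumption: same checkpoint id as in the pipeline example below.
model_name = 'I77/question_classifier'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = 'Этот вопрос логичен?'  # "Is this question logical?"
inputs = tokenizer(text, return_tensors='pt', truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
# 1 = logically correct, 0 = logically incorrect (see the label mapping above)
print(pred)
```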
## Metrics on validation
| Metric | Value |
|---|---|
| Accuracy | 0.9500 |
| Precision | 1.0000 |
| Recall | 0.9000 |
| F1-score | 0.9474 |
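
A hedged sketch of how metrics like these could be recomputed on the validation data with scikit-learn. The split name `validation`, the column names `text`/`label`, and the `LABEL_<id>` output format are assumptions about `I77/valid_invalid_questions` and the model config, not confirmed details.

```python
from datasets import load_dataset
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from transformers import pipeline

# Assumptions: a 'validation' split with 'text' and 'label' columns.
ds = load_dataset('I77/valid_invalid_questions', split='validation')
classifier = pipeline('text-classification', model='I77/question_classifier')

# Assumption: predicted labels look like 'LABEL_0' / 'LABEL_1'.
preds = [int(out['label'].split('_')[-1]) for out in classifier(ds['text'], truncation=True)]
labels = ds['label']

print('Accuracy:', accuracy_score(labels, preds))
print('Precision:', precision_score(labels, preds))
print('Recall:', recall_score(labels, preds))
print('F1-score:', f1_score(labels, preds))
```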
## Example inference
```python
from transformers import pipeline

classifier = pipeline('text-classification', model='I77/question_classifier')
print(classifier('Этот вопрос логичен?'))  # "Is this question logical?"
```
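
The pipeline returns a list of dictionaries of the form `[{'label': ..., 'score': ...}]`; the exact label strings depend on the `id2label` mapping in the model's config (e.g. `LABEL_0`/`LABEL_1` if no custom names are set).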