# Osmosis Coverage Classifier

This model classifies how well a response covers a given learning objective, assigning one of three coverage labels: low, fair, or satisfactory.
## Model Description
- Base Model: distilbert-base-uncased
- Task: Multi-class text classification
- Classes: low (0), fair (1), satisfactory (2)
- Training Data: Custom osmosis dataset
## Performance
- Overall Test Accuracy: 0.7600
- Macro F1 Score: 0.7584
- Macro Precision: 0.7571
- Macro Recall: 0.7600
### Per-Class Performance

| Class        | Precision | Recall | F1     |
|--------------|-----------|--------|--------|
| Low          | 0.8897    | 0.9240 | 0.9065 |
| Fair         | 0.6577    | 0.6365 | 0.6469 |
| Satisfactory | 0.7239    | 0.7193 | 0.7216 |
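For reference, metrics like these can be reproduced from raw predictions with `scikit-learn`. The sketch below is illustrative only: `y_true` and `y_pred` are placeholder lists of integer class IDs, not the actual test set.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Placeholder data: integer class IDs (0=low, 1=fair, 2=satisfactory)
y_true = [0, 1, 2, 1, 0]
y_pred = [0, 1, 2, 2, 0]

# Overall accuracy and macro-averaged precision/recall/F1
accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="macro")
print(f"Accuracy: {accuracy:.4f}, Macro P/R/F1: {precision:.4f}/{recall:.4f}/{f1:.4f}")

# Per-class breakdown (average=None returns one score per class)
per_class = precision_recall_fscore_support(y_true, y_pred, average=None, labels=[0, 1, 2])
```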
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("KingTechnician/bert-osmosis-coverage-v2")
model = AutoModelForSequenceClassification.from_pretrained("KingTechnician/bert-osmosis-coverage-v2")

# Example usage: the objective and response are encoded as a sentence pair
objective = "Your learning objective here"
response = "The response text to classify"
inputs = tokenizer(objective, response, return_tensors="pt", padding=True, truncation=True)

with torch.no_grad():
    outputs = model(**inputs)

predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
predicted_class = torch.argmax(predictions, dim=-1).item()

# Map prediction to label
id2label = {0: "low", 1: "fair", 2: "satisfactory"}
predicted_label = id2label[predicted_class]
print(f"Predicted coverage: {predicted_label}")
```
## Training Details
- Learning Rate: 2e-5
- Batch Size: 64
- Epochs: 3
- Weight Decay: 0.01
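The card does not include the training script; the sketch below shows one plausible way to wire these hyperparameters into Hugging Face `TrainingArguments`/`Trainer` on top of `distilbert-base-uncased`. The dataset rows and output path are assumptions standing in for the custom osmosis dataset.

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=3,
    id2label={0: "low", 1: "fair", 2: "satisfactory"},
    label2id={"low": 0, "fair": 1, "satisfactory": 2},
)

# Toy stand-in for the (private) osmosis dataset: (objective, response, label) rows
raw = Dataset.from_dict({
    "objective": ["Explain osmosis", "Define diffusion"],
    "response": ["Water moves across a membrane...", "Not sure."],
    "label": [2, 0],
})

def tokenize(batch):
    # Encode objective/response as a sentence pair, matching the inference code above
    return tokenizer(batch["objective"], batch["response"],
                     truncation=True, padding="max_length", max_length=128)

dataset = raw.map(tokenize, batched=True)

# Hyperparameters from the list above
args = TrainingArguments(
    output_dir="bert-osmosis-coverage",  # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    num_train_epochs=3,
    weight_decay=0.01,
)

trainer = Trainer(model=model, args=args, train_dataset=dataset)
trainer.train()
```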