---
language:
- en
base_model:
- FacebookAI/roberta-large
pipeline_tag: text-classification
---
# Sentence Dating Model

## Model Description
The Sentence Dating Model is a fine-tuned RoBERTa-large transformer that predicts the decade in which a given sentence was written. It is trained on historical text data and classifies sentences into time periods from 1700 to 2021. It is particularly useful for historical linguistics, text dating, and studies of semantic change.
## Reference Paper

This model is based on the work described in:

**Sense-specific Historical Word Usage Generation**
Pierluigi Cassotti, Nina Tahmasebi
University of Gothenburg
[Link to Paper]
## Training Details

### Base Model

- Model: `roberta-large`
- Fine-tuned for: sentence classification into time periods (1700-2021)
### Dataset

The model is trained on a dataset derived from historical text corpora, including examples extracted from the Oxford English Dictionary (OED). The dataset includes:

- Texts: sentences extracted from historical documents.
- Labels: time periods, grouped by decade.
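The label encoding is not spelled out in this card, but the usage example below implies that label `i` corresponds to the decade beginning in `1700 + 10 * i`. A minimal sketch of that mapping, with illustrative helper names that are not part of any released code:

```python
# Decade label encoding implied by the usage example below:
# label i <-> the decade starting in 1700 + 10 * i.

def year_to_label(year: int) -> int:
    """Map a year in the model's 1700-2021 range to its decade index."""
    if not 1700 <= year <= 2021:
        raise ValueError(f"year {year} is outside the 1700-2021 range")
    return (year - 1700) // 10

def label_to_decade(label: int) -> int:
    """Map a decade index back to the starting year of the decade."""
    return 1700 + label * 10

assert year_to_label(1776) == 7                       # the 1770s
assert label_to_decade(year_to_label(1776)) == 1770
```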
### Fine-tuning Process

- Tokenizer: `AutoTokenizer.from_pretrained("roberta-large")`
- Loss function: cross-entropy loss
- Optimizer: AdamW
- Batch size: 32
- Learning rate: 1e-6
- Epochs: 1
- Evaluation strategy: steps (evaluation every 10% of the training data)
- Metric: weighted F1-score
- Data split: 90% training, 10% validation
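The training script itself is not part of this card, so the sketch below only illustrates how the settings above map onto the Hugging Face `Trainer` API. The toy two-sentence dataset, the `compute_metrics` helper, and the label count (one class per decade from the 1700s through the 2020s, i.e. 33 classes) are assumptions for illustration, not the released code:

```python
import numpy as np
from datasets import Dataset
from sklearn.metrics import f1_score
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

NUM_LABELS = (2020 - 1700) // 10 + 1  # one class per decade: 33

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-large", num_labels=NUM_LABELS
)

# Toy stand-in for the OED-derived corpus: text plus a decade-index label.
data = Dataset.from_dict({
    "text": ["Thou art a villain.", "He texted her back instantly."],
    "label": [(1700 - 1700) // 10, (2010 - 1700) // 10],
})
data = data.map(lambda x: tokenizer(x["text"], truncation=True), batched=True)
split = data.train_test_split(test_size=0.1)  # 90/10 train/validation split

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"f1": f1_score(labels, preds, average="weighted")}

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="text-dating",
        per_device_train_batch_size=32,
        learning_rate=1e-6,
        num_train_epochs=1,
        evaluation_strategy="steps",   # newer transformers versions: eval_strategy
        metric_for_best_model="f1",
        report_to="none",
    ),
    train_dataset=split["train"],
    eval_dataset=split["test"],
    tokenizer=tokenizer,               # enables padding via the default collator
    compute_metrics=compute_metrics,
)
trainer.train()
```

Cross-entropy is the loss `AutoModelForSequenceClassification` applies by default when integer labels are provided, and AdamW is the `Trainer` default optimizer, so neither needs explicit configuration; `eval_steps` would be set to roughly 10% of the total training steps to match the schedule above.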
## Usage

### Example
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("ChangeIsKey/text-dating")
model = AutoModelForSequenceClassification.from_pretrained("ChangeIsKey/text-dating")

# Example text
text = "He put the phone back in the cradle and turned toward the kitchen."

# Tokenize input
inputs = tokenizer(text, return_tensors="pt")

# Predict
with torch.no_grad():
    outputs = model(**inputs)

predicted_label = torch.argmax(outputs.logits, dim=1).item()
print(f"Predicted decade: {1700 + predicted_label * 10}")
```
## Limitations
- The model may struggle to distinguish closely related time periods (e.g., the 1950s vs. the 1960s).
- Biases may exist due to the composition of the training dataset.
- Performance is lower on short, contextually ambiguous sentences.
## Citation
If you use this model, please cite:
```bibtex
@article{cassotti2025,
  author  = {Cassotti, Pierluigi and Tahmasebi, Nina},
  title   = {Sense-specific Historical Word Usage Generation},
  journal = {TACL},
  year    = {2025}
}
```
## License
MIT License