Cross-Encoder for Semantic Textual Similarity

This model was trained using the SentenceTransformers CrossEncoder class.

Training Data

This model was trained on the STS benchmark dataset. Given two sentences, the model predicts a score between 0 and 1 indicating their semantic similarity.

Usage and Performance

Pre-trained models can be used like this:

from sentence_transformers import CrossEncoder

# Load the pre-trained cross-encoder
model = CrossEncoder('cross-encoder/stsb-roberta-large')

# Each pair is passed through the model jointly and scored
scores = model.predict([('Sentence 1', 'Sentence 2'), ('Sentence 3', 'Sentence 4')])

The model will predict scores for the pairs ('Sentence 1', 'Sentence 2') and ('Sentence 3', 'Sentence 4').

You can also use this model without sentence_transformers, by loading it with the Transformers AutoModelForSequenceClassification class.
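As a rough sketch of that approach (the helper function name `predict_scores` is our own, not part of either library), the cross-encoder can be driven directly through the Transformers tokenizer and model classes, feeding both sentences of each pair into a single forward pass:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification


def predict_scores(pairs, model_name='cross-encoder/stsb-roberta-large'):
    """Score sentence pairs with the raw Transformers API (no sentence_transformers)."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name)
    model.eval()
    # A cross-encoder reads both sentences together: tokenize them as
    # (first_sentence, second_sentence) pairs in one batch.
    firsts, seconds = zip(*pairs)
    features = tokenizer(list(firsts), list(seconds),
                         padding=True, truncation=True, return_tensors='pt')
    with torch.no_grad():
        # The model emits a single regression logit per pair: the similarity score.
        return model(**features).logits.squeeze(-1).tolist()


if __name__ == '__main__':
    print(predict_scores([('Sentence 1', 'Sentence 2'),
                          ('Sentence 3', 'Sentence 4')]))
```

This mirrors what CrossEncoder.predict does internally for a regression head, minus conveniences such as automatic batching.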
