CALE-MBERT-en

This is a sentence-transformers model: it maps occurrences of a word to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.

Usage (Sentence-Transformers)

pip install -U sentence-transformers

Then you can use the model like this:

from sentence_transformers import SentenceTransformer

# 1. Load CALE model
model = SentenceTransformer("gabrielloiseau/CALE-MBERT-en")

sentences = [
    "the boy could easily <t>distinguish</t> the different note values",
    "he patient’s ability to <t>recognize</t> forms and shapes",
    "the government had refused to <t>recognize</t> their autonomy and existence as a state",
]

# 2. Calculate embeddings
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 1024)

# 3. Calculate the embedding similarities
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.8725, 0.5957],
#         [0.8725, 1.0000, 0.5861],
#         [0.5957, 0.5861, 1.0000]])

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False, 'architecture': 'ModernBertModel'})
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
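For reference, the two modules above are a ModernBERT encoder followed by mean pooling. A rough equivalent using Hugging Face Transformers directly might look like the sketch below; this is only an illustration of the pooling step, and the SentenceTransformer usage above remains the intended path.

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("gabrielloiseau/CALE-MBERT-en")
model = AutoModel.from_pretrained("gabrielloiseau/CALE-MBERT-en")

sentences = ["the boy could easily <t>distinguish</t> the different note values"]
encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**encoded).last_hidden_state  # (batch, seq_len, 1024)

# Mean pooling over non-padding tokens, mirroring pooling_mode_mean_tokens above
mask = encoded["attention_mask"].unsqueeze(-1).float()
sentence_embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embeddings.shape)
# torch.Size([1, 1024])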