CALE
A collection of the models and the dataset used in the paper "CALE: Concept-Aligned Embeddings for Both Within-Lemma and Inter-Lemma Sense Differentiation".
This is a sentence-transformers model: it maps occurrences of a word to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can use the model like this:
from sentence_transformers import SentenceTransformer
# 1. Load CALE model
model = SentenceTransformer("gabrielloiseau/CALE-MBERT-en")
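# Target word occurrences are marked with <t>...</t> in the input sentences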
sentences = [
"the boy could easily <t>distinguish</t> the different note values",
"he patient’s ability to <t>recognize</t> forms and shapes",
"the government had refused to <t>recognize</t> their autonomy and existence as a state",
]
# 2. Calculate embeddings
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 1024)
# 3. Calculate the embedding similarities
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.8725, 0.5957],
# [0.8725, 1.0000, 0.5861],
# [0.5957, 0.5861, 1.0000]])
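The matrix reflects the intended behaviour: the two perception-related occurrences (rows 1 and 2) are close (0.87), while the diplomatic sense of "recognize" (row 3) is clearly separated (≈0.59). Because the embeddings are ordinary vectors, any off-the-shelf clustering algorithm can group occurrences by sense. Below is a minimal sketch using scikit-learn's AgglomerativeClustering; the "bank" sentences and the cluster count are illustrative assumptions, not from the model card:

from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering

model = SentenceTransformer("gabrielloiseau/CALE-MBERT-en")

# Occurrences of the lemma "bank" in two different senses (illustrative examples)
occurrences = [
    "she deposited the check at the <t>bank</t> on Monday",
    "the <t>bank</t> approved the loan application",
    "they had a picnic on the <t>bank</t> of the river",
    "fish gathered near the muddy <t>bank</t>",
]

embeddings = model.encode(occurrences)

# Group the occurrences into 2 sense clusters using cosine distance
clustering = AgglomerativeClustering(n_clusters=2, metric="cosine", linkage="average")
labels = clustering.fit_predict(embeddings)
print(labels)  # e.g. [0, 0, 1, 1]: financial sense vs. riverside sense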
Full model architecture:

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False, 'architecture': 'ModernBertModel'})
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
Base model: answerdotai/ModernBERT-large
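Since the model is a standard SentenceTransformer (a ModernBERT encoder followed by mean pooling over tokens), the library's usual semantic-search utilities work unchanged. A minimal retrieval sketch, reusing the example sentences above with the built-in util.semantic_search helper:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("gabrielloiseau/CALE-MBERT-en")

# Corpus of word occurrences, each with its target marked by <t>...</t>
corpus = [
    "the boy could easily <t>distinguish</t> the different note values",
    "the government had refused to <t>recognize</t> their autonomy and existence as a state",
]
query = "the patient’s ability to <t>recognize</t> forms and shapes"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Retrieve the corpus occurrence whose sense is closest to the query
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=1)
print(hits[0])  # e.g. [{'corpus_id': 0, 'score': 0.87}]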