This Model2Vec model was created using Tokenlearn, with nomic-embed-text-v2-moe as the base model. The output dimension is 768.
Usage
Load this model using the from_pretrained method:
from model2vec import StaticModel
# Load a pretrained Model2Vec model
model = StaticModel.from_pretrained("cnmoro/static-nomic-large")
# Compute text embeddings
embeddings = model.encode(["Example sentence"])
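Static embeddings like these are typically compared with cosine similarity. The sketch below shows that step using plain NumPy on placeholder vectors standing in for `model.encode(...)` output (which for this model is 768-dimensional); the helper function name is illustrative, not part of the model2vec API.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity: dot product of the vectors divided by
    # the product of their L2 norms
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Placeholder 768-dim vectors standing in for model.encode(...) output
rng = np.random.default_rng(0)
emb_a = rng.normal(size=768)
emb_b = emb_a.copy()

# Identical vectors score 1.0; unrelated texts score near 0
score = cosine_similarity(emb_a, emb_b)
```

In practice you would pass the rows of the array returned by `model.encode([...])` to such a function to rank sentences by semantic similarity.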
Model tree for cnmoro/static-nomic-large
- Base model: FacebookAI/xlm-roberta-base
- Finetuned: nomic-ai/nomic-xlm-2048
- Finetuned: nomic-ai/nomic-embed-text-v2-moe
Evaluation results
MTEB Assin2STS (default), test set, self-reported:
- pearson: 64.533
- spearman: 58.746
- cosine_pearson: 64.533
- cosine_spearman: 58.746
- manhattan_pearson: 62.204
- manhattan_spearman: 58.837
- euclidean_pearson: 62.072
- euclidean_spearman: 58.746
- main_score: 58.746
MTEB BIOSSES (default), test set, self-reported:
- pearson: 69.056