This Model2Vec model was created with Tokenlearn, using nomic-embed-text-v2-moe as the base model, and trained on roughly 20M passages (English and Portuguese).
The output dimension is 50.
It is intended as a smaller, more minimal version of cnmoro/static-nomic-eng-ptbr.
Usage
Load this model using the from_pretrained method:
from model2vec import StaticModel
# Load a pretrained Model2Vec model
model = StaticModel.from_pretrained("cnmoro/static-nomic-eng-ptbr-tiny")
# Compute text embeddings
embeddings = model.encode(["Example sentence"])
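A common next step with static embeddings is comparing texts by cosine similarity. The sketch below assumes, as is typical for Model2Vec, that encode returns one vector per input text; the cosine_similarity helper is our own illustration, not part of the model2vec API:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two 1-D vectors:
    # dot product divided by the product of their norms.
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# With the model loaded as above (assumption: encode returns one
# 50-dimensional vector per input text):
# emb = model.encode(["Uma frase de exemplo", "An example sentence"])
# score = cosine_similarity(emb[0], emb[1])
```

Scores close to 1.0 indicate semantically similar texts; since the model is bilingual, this works across English and Portuguese inputs as well.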
Model tree for cnmoro/static-nomic-eng-ptbr-tiny
- Base model: FacebookAI/xlm-roberta-base
- Finetuned: nomic-ai/nomic-xlm-2048
- Finetuned: nomic-ai/nomic-embed-text-v2-moe