zpn committed
Commit d0f89cf · 1 Parent(s): f6a8873

fix: add paper

Files changed (1)
  1. README.md +6 -2
README.md CHANGED
@@ -114,8 +114,10 @@ language:
 
 # nomic-embed-text-v2-moe: Multilingual Mixture of Experts Text Embeddings
 
+This model was presented in the paper [Training Sparse Mixture Of Experts Text Embedding Models](https://huggingface.co/papers/2502.07972).
+
 ## Model Overview
-`nomic-embed-text-v2-moe` is SoTA multilingual MoE text embedding model that excels at multilingual retrieval:
+`nomic-embed-text-v2-moe` is a SoTA multilingual MoE text embedding model that excels at multilingual retrieval:
 
 - **High Performance**: SoTA Multilingual performance compared to ~300M parameter models, competitive with models 2x in size
 - **Multilinguality**: Supports ~100 languages and trained on over 1.6B pairs
@@ -134,7 +136,9 @@ language:
 | Arctic Embed v2 Large | 568 | 1024 | **55.65** | 66.00 | ❌ | ❌ | ❌ |
 | mE5 Large | 560 | 1024 | 51.40 | 66.50 | ❌ | ❌ | ❌ |
 
+## Paper Abstract
 
+Transformer-based text embedding models have improved their performance on benchmarks like MIRACL and BEIR by increasing their parameter counts. However, this scaling approach introduces significant deployment challenges, including increased inference latency and memory usage. These challenges are particularly severe in retrieval-augmented generation (RAG) applications, where large models' increased memory requirements constrain dataset ingestion capacity, and their higher latency directly impacts query-time performance. While causal language models have addressed similar efficiency challenges using Mixture of Experts (MoE) architectures, this approach hasn't been successfully adapted to the general text embedding setting. In this paper, we introduce Nomic Embed v2, the first general purpose MoE text embedding model. Our model outperforms models in the same parameter class on both monolingual and multilingual benchmarks while also maintaining competitive performance with models twice its size. We open-source all code, models, and evaluation data to ensure full reproducibility of our training pipeline at https://github.com/nomic-ai/contrastors.
 
 ## Model Architecture
 - **Total Parameters**: 475M
@@ -270,4 +274,4 @@ If you find the model, dataset, or training code useful, please cite our work
   primaryClass={cs.CL},
   url={https://arxiv.org/abs/2502.07972},
 }
-```
+```
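
The README being edited documents a multilingual text embedding model for retrieval. Below is a minimal retrieval sketch, not part of this commit: it assumes the checkpoint is available on the Hugging Face Hub as `nomic-ai/nomic-embed-text-v2-moe`, loads via `sentence-transformers` with `trust_remote_code=True`, and uses the `search_query:` / `search_document:` prefixes that earlier Nomic Embed releases use; those prefixes are an assumption here, not taken from the diff above.

```python
# Minimal retrieval sketch (assumptions noted above): embed one query and a few
# documents with the MoE embedding model, then rank documents by cosine similarity.
from sentence_transformers import SentenceTransformer, util

# Assumed Hub id; trust_remote_code is typically needed for custom architectures.
model = SentenceTransformer("nomic-ai/nomic-embed-text-v2-moe", trust_remote_code=True)

# Assumed task prefixes, following the convention of earlier Nomic Embed models.
query = "search_query: What is a mixture of experts model?"
documents = [
    "search_document: Mixture of Experts layers route each token to a small subset of expert feed-forward networks.",
    "search_document: BEIR and MIRACL are benchmarks for monolingual and multilingual retrieval.",
]

query_emb = model.encode(query)      # shape: (dim,)
doc_embs = model.encode(documents)   # shape: (num_docs, dim)

# Cosine similarity between the query and every document; higher is more relevant.
scores = util.cos_sim(query_emb, doc_embs)
print(scores)
```

If the model card's own usage section specifies a different calling convention (for example, named prompts instead of literal prefixes), follow the model card rather than this sketch.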