---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:5822
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: nomic-ai/modernbert-embed-base
widget:
- source_sentence: >-
    this information about the two documents withheld in part under the
    deliberative-process 128 privilege in No. 11-444, see First Lutz Decl. Ex. DD
    at 17, 141, but the descriptions of the decisionmaking authority are generic,
    stating that the withheld information is a “recommendation from the [FOIA]
    analyst to his/her supervisor,” id. at 17, and a “recommendation from the
  sentences:
  - What did the plaintiff assert about the CIA's inaccurate representations?
  - >-
    What type of document is mentioned as an exhibit in conjunction with the
    withheld documents?
  - >-
    ¿Qué ámbito jurisdiccional es mencionado en el contexto de derechos sobre la
    propia imagen?
- source_sentence: |-
    Artificial Intelligence, Corp. y que prestaba servicios mediante contrato para
    el Departamento de Producción de tal corporación. Adujo que, no se encontraba
    en la obligación de solicitar autorización a la parte apelada para utilizar su
    imagen, ya que se le había pagado por la producción de múltiples videos
    publicitarios para el uso de las empresas. Luego de varias incidencias
  sentences:
  - Who has the burden to provide a sufficient record on appeal?
  - >-
    ¿Para qué departamento prestaba servicios Artificial Intelligence, Corp.
    según el contrato?
  - What section numbers are referenced for further information?
- source_sentence: >-
    submission by protégé firms. SHS MJAR at 28–30; VCH MJAR at 28–30 (same).
    This, Plaintiffs contend, violates Section 125.8(e) because it purportedly
    subjects protégés to heightened evaluation criteria as compared to offerors
    generally and makes it harder for mentor-protégé JVs to compete against more
    experienced firms with larger portfolios of past work. SHS MJAR at 28–
  sentences:
  - >-
    On what date were the plaintiff's petition, complaint, and trial court's
    order filed?
  - What section do Plaintiffs contend is violated?
  - What is the amount of pages the party seeks to withhold?
- source_sentence: >-
    Beginning with the CIA’s submissions, the CIA states in its declaration
    submitted in No. 11-445 that “[s]ome of the records for which information has
    been withheld pursuant to Exemption (b)(5) contain confidential
    communications between CIA staff and attorneys within the CIA’s Office of
    General Counsel about the processing of certain FOIA requests.” Third Lutz
  sentences:
  - >-
    What is the subject of the confidential communications mentioned in the
    document?
  - >-
    Which rule number is associated with the responsibilities regarding nonlawyer
    assistants?
  - ¿Qué número de referencia tiene el documento?
- source_sentence: >-
    contracting/contracting-assistance-programs/sba-mentor-protege-program (last
    visited Apr. 19, 2023). 5 protégé must demonstrate that the added
    mentor-protégé relationship will not adversely affect the development of
    either protégé firm (e.g., the second firm may not be a competitor of the
    first firm).” 13 C.F.R. § 125.9(b)(3).
  sentences:
  - >-
    What discretion do district courts have regarding a defendant’s invocation
    of FOIA exemptions?
  - What must the protégé demonstrate about the mentor-protégé relationship?
  - Which exemptions are mentioned in relation to the plaintiff's accusations?
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: ModernBERT Embed base Legal Matryoshka
  results:
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 768
      type: dim_768
    metrics:
    - {type: cosine_accuracy@1, value: 0.5285935085007728, name: Cosine Accuracy@1}
    - {type: cosine_accuracy@3, value: 0.5718701700154559, name: Cosine Accuracy@3}
    - {type: cosine_accuracy@5, value: 0.6646058732612056, name: Cosine Accuracy@5}
    - {type: cosine_accuracy@10, value: 0.7310664605873262, name: Cosine Accuracy@10}
    - {type: cosine_precision@1, value: 0.5285935085007728, name: Cosine Precision@1}
    - {type: cosine_precision@3, value: 0.5141679546625451, name: Cosine Precision@3}
    - {type: cosine_precision@5, value: 0.3941267387944359, name: Cosine Precision@5}
    - {type: cosine_precision@10, value: 0.2329211746522411, name: Cosine Precision@10}
    - {type: cosine_recall@1, value: 0.17877382792375063, name: Cosine Recall@1}
    - {type: cosine_recall@3, value: 0.4894384337970118, name: Cosine Recall@3}
    - {type: cosine_recall@5, value: 0.6120556414219475, name: Cosine Recall@5}
    - {type: cosine_recall@10, value: 0.7184441009788768, name: Cosine Recall@10}
    - {type: cosine_ndcg@10, value: 0.6300476733345887, name: Cosine Ndcg@10}
    - {type: cosine_mrr@10, value: 0.5741100561811532, name: Cosine Mrr@10}
    - {type: cosine_map@100, value: 0.6186392686743281, name: Cosine Map@100}
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 512
      type: dim_512
    metrics:
    - {type: cosine_accuracy@1, value: 0.5162287480680062, name: Cosine Accuracy@1}
    - {type: cosine_accuracy@3, value: 0.5486862442040186, name: Cosine Accuracy@3}
    - {type: cosine_accuracy@5, value: 0.6414219474497682, name: Cosine Accuracy@5}
    - {type: cosine_accuracy@10, value: 0.7171561051004637, name: Cosine Accuracy@10}
    - {type: cosine_precision@1, value: 0.5162287480680062, name: Cosine Precision@1}
    - {type: cosine_precision@3, value: 0.4981968057702215, name: Cosine Precision@3}
    - {type: cosine_precision@5, value: 0.38083462132921175, name: Cosine Precision@5}
    - {type: cosine_precision@10, value: 0.22720247295208656, name: Cosine Precision@10}
    - {type: cosine_recall@1, value: 0.17400824317362185, name: Cosine Recall@1}
    - {type: cosine_recall@3, value: 0.47346728490468826, name: Cosine Recall@3}
    - {type: cosine_recall@5, value: 0.5910613086038125, name: Cosine Recall@5}
    - {type: cosine_recall@10, value: 0.702344152498712, name: Cosine Recall@10}
    - {type: cosine_ndcg@10, value: 0.6137901932050573, name: Cosine Ndcg@10}
    - {type: cosine_mrr@10, value: 0.5592913569343243, name: Cosine Mrr@10}
    - {type: cosine_map@100, value: 0.6021884440021203, name: Cosine Map@100}
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 256
      type: dim_256
    metrics:
    - {type: cosine_accuracy@1, value: 0.482225656877898, name: Cosine Accuracy@1}
    - {type: cosine_accuracy@3, value: 0.5285935085007728, name: Cosine Accuracy@3}
    - {type: cosine_accuracy@5, value: 0.598145285935085, name: Cosine Accuracy@5}
    - {type: cosine_accuracy@10, value: 0.678516228748068, name: Cosine Accuracy@10}
    - {type: cosine_precision@1, value: 0.482225656877898, name: Cosine Precision@1}
    - {type: cosine_precision@3, value: 0.46986089644513135, name: Cosine Precision@3}
    - {type: cosine_precision@5, value: 0.35857805255023184, name: Cosine Precision@5}
    - {type: cosine_precision@10, value: 0.21468315301391033, name: Cosine Precision@10}
    - {type: cosine_recall@1, value: 0.16267387944358577, name: Cosine Recall@1}
    - {type: cosine_recall@3, value: 0.4492529623905203, name: Cosine Recall@3}
    - {type: cosine_recall@5, value: 0.5569294178258629, name: Cosine Recall@5}
    - {type: cosine_recall@10, value: 0.6642194744976816, name: Cosine Recall@10}
    - {type: cosine_ndcg@10, value: 0.5781404945062661, name: Cosine Ndcg@10}
    - {type: cosine_mrr@10, value: 0.5249122936139936, name: Cosine Mrr@10}
    - {type: cosine_map@100, value: 0.5698418441661705, name: Cosine Map@100}
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 128
      type: dim_128
    metrics:
    - {type: cosine_accuracy@1, value: 0.41576506955177744, name: Cosine Accuracy@1}
    - {type: cosine_accuracy@3, value: 0.4435857805255023, name: Cosine Accuracy@3}
    - {type: cosine_accuracy@5, value: 0.5363214837712519, name: Cosine Accuracy@5}
    - {type: cosine_accuracy@10, value: 0.6105100463678517, name: Cosine Accuracy@10}
    - {type: cosine_precision@1, value: 0.41576506955177744, name: Cosine Precision@1}
    - {type: cosine_precision@3, value: 0.3992787223080887, name: Cosine Precision@3}
    - {type: cosine_precision@5, value: 0.31282843894899537, name: Cosine Precision@5}
    - {type: cosine_precision@10, value: 0.19242658423493045, name: Cosine Precision@10}
    - {type: cosine_recall@1, value: 0.14258114374034003, name: Cosine Recall@1}
    - {type: cosine_recall@3, value: 0.3835651725914477, name: Cosine Recall@3}
    - {type: cosine_recall@5, value: 0.48776403915507466, name: Cosine Recall@5}
    - {type: cosine_recall@10, value: 0.5963420917053066, name: Cosine Recall@10}
    - {type: cosine_ndcg@10, value: 0.5108672198469205, name: Cosine Ndcg@10}
    - {type: cosine_mrr@10, value: 0.4573213365717227, name: Cosine Mrr@10}
    - {type: cosine_map@100, value: 0.5029873598412773, name: Cosine Map@100}
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 64
      type: dim_64
    metrics:
    - {type: cosine_accuracy@1, value: 0.312210200927357, name: Cosine Accuracy@1}
    - {type: cosine_accuracy@3, value: 0.3508500772797527, name: Cosine Accuracy@3}
    - {type: cosine_accuracy@5, value: 0.43585780525502316, name: Cosine Accuracy@5}
    - {type: cosine_accuracy@10, value: 0.47913446676970634, name: Cosine Accuracy@10}
    - {type: cosine_precision@1, value: 0.312210200927357, name: Cosine Precision@1}
    - {type: cosine_precision@3, value: 0.3091190108191654, name: Cosine Precision@3}
    - {type: cosine_precision@5, value: 0.250386398763524, name: Cosine Precision@5}
    - {type: cosine_precision@10, value: 0.14976816074188565, name: Cosine Precision@10}
    - {type: cosine_recall@1, value: 0.10497166409067489, name: Cosine Recall@1}
    - {type: cosine_recall@3, value: 0.2954662545079856, name: Cosine Recall@3}
    - {type: cosine_recall@5, value: 0.3930963420917053, name: Cosine Recall@5}
    - {type: cosine_recall@10, value: 0.46805770221535287, name: Cosine Recall@10}
    - {type: cosine_ndcg@10, value: 0.39563928784117025, name: Cosine Ndcg@10}
    - {type: cosine_mrr@10, value: 0.3508985304580356, name: Cosine Mrr@10}
    - {type: cosine_map@100, value: 0.3939277813526489, name: Cosine Map@100}
datasets:
- AdamLucek/legal-rag-positives-synthetic
---

# ModernBERT Embed base Legal Matryoshka

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base) on the [AdamLucek/legal-rag-positives-synthetic](https://huggingface.co/datasets/AdamLucek/legal-rag-positives-synthetic) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base)
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
    - [AdamLucek/legal-rag-positives-synthetic](https://huggingface.co/datasets/AdamLucek/legal-rag-positives-synthetic)
- **Language:** en
- **License:** apache-2.0

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("AdamLucek/ModernBERT-embed-base-legal-MRL")
# Run inference
sentences = [
    'contracting/contracting-assistance-programs/sba-mentor-protege-program (last visited Apr. 19, \n2023). \n5 \n \nprotégé must demonstrate that the added mentor-protégé relationship will not adversely affect the \ndevelopment of either protégé firm (e.g., the second firm may not be a competitor of the first \nfirm).” 13 C.F.R. § 125.9(b)(3).',
    'What must the protégé demonstrate about the mentor-protégé relationship?',
    'What discretion do district courts have regarding a defendant’s invocation of FOIA exemptions?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
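Because the model was trained with a Matryoshka objective at dimensions 768, 512, 256, 128, and 64, its embeddings can also be truncated to a smaller size at load time. A minimal sketch, assuming a Sentence Transformers version (v2.7 or later) that supports the `truncate_dim` argument:

```python
from sentence_transformers import SentenceTransformer

# Keep only the first 256 dimensions of each embedding
model = SentenceTransformer("AdamLucek/ModernBERT-embed-base-legal-MRL", truncate_dim=256)

embeddings = model.encode([
    "What must the protégé demonstrate about the mentor-protégé relationship?",
])
print(embeddings.shape)
# (1, 256)
```

Smaller dimensions trade some retrieval quality for lower storage and faster search; the evaluation below quantifies that trade-off.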
## Evaluation

### Metrics

#### Information Retrieval

* Datasets: `dim_768`, `dim_512`, `dim_256`, `dim_128` and `dim_64`
* Evaluated with [InformationRetrievalEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | dim_768  | dim_512    | dim_256    | dim_128    | dim_64     |
|:--------------------|:---------|:-----------|:-----------|:-----------|:-----------|
| cosine_accuracy@1   | 0.5286   | 0.5162     | 0.4822     | 0.4158     | 0.3122     |
| cosine_accuracy@3   | 0.5719   | 0.5487     | 0.5286     | 0.4436     | 0.3509     |
| cosine_accuracy@5   | 0.6646   | 0.6414     | 0.5981     | 0.5363     | 0.4359     |
| cosine_accuracy@10  | 0.7311   | 0.7172     | 0.6785     | 0.6105     | 0.4791     |
| cosine_precision@1  | 0.5286   | 0.5162     | 0.4822     | 0.4158     | 0.3122     |
| cosine_precision@3  | 0.5142   | 0.4982     | 0.4699     | 0.3993     | 0.3091     |
| cosine_precision@5  | 0.3941   | 0.3808     | 0.3586     | 0.3128     | 0.2504     |
| cosine_precision@10 | 0.2329   | 0.2272     | 0.2147     | 0.1924     | 0.1498     |
| cosine_recall@1     | 0.1788   | 0.174      | 0.1627     | 0.1426     | 0.105      |
| cosine_recall@3     | 0.4894   | 0.4735     | 0.4493     | 0.3836     | 0.2955     |
| cosine_recall@5     | 0.6121   | 0.5911     | 0.5569     | 0.4878     | 0.3931     |
| cosine_recall@10    | 0.7184   | 0.7023     | 0.6642     | 0.5963     | 0.4681     |
| **cosine_ndcg@10**  | **0.63** | **0.6138** | **0.5781** | **0.5109** | **0.3956** |
| cosine_mrr@10       | 0.5741   | 0.5593     | 0.5249     | 0.4573     | 0.3509     |
| cosine_map@100      | 0.6186   | 0.6022     | 0.5698     | 0.503      | 0.3939     |
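The scores above come from one `InformationRetrievalEvaluator` run per Matryoshka dimension. A minimal sketch of that setup, assuming a recent Sentence Transformers release where evaluators accept `truncate_dim`; the queries, corpus, and relevance judgments below are illustrative placeholders, not the actual evaluation split:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("AdamLucek/ModernBERT-embed-base-legal-MRL")

# Placeholder data: query id -> question, doc id -> passage, query id -> relevant doc ids
queries = {"q1": "What must the protégé demonstrate about the mentor-protégé relationship?"}
corpus = {
    "d1": "protégé must demonstrate that the added mentor-protégé relationship will not adversely affect ...",
    "d2": "Beginning with the CIA’s submissions, the CIA states in its declaration submitted in No. 11-445 ...",
}
relevant_docs = {"q1": {"d1"}}

# One evaluator per truncation size, mirroring the dim_768 ... dim_64 columns above
for dim in [768, 512, 256, 128, 64]:
    evaluator = InformationRetrievalEvaluator(
        queries=queries,
        corpus=corpus,
        relevant_docs=relevant_docs,
        name=f"dim_{dim}",
        truncate_dim=dim,
    )
    print(evaluator(model))
```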
## Training Details

### Training Dataset

#### [AdamLucek/legal-rag-positives-synthetic](https://huggingface.co/datasets/AdamLucek/legal-rag-positives-synthetic)

* Dataset: [AdamLucek/legal-rag-positives-synthetic](https://huggingface.co/datasets/AdamLucek/legal-rag-positives-synthetic)
* Size: 5,822 training samples
* Columns: `positive` and `anchor`
* Approximate statistics based on the first 1000 samples:
  |         | positive | anchor |
  |:--------|:---------|:-------|
  | type    | string   | string |
  | details |          |        |
* Samples:
  | positive | anchor |
  |:---------|:-------|
  | <code>infrastructure security information,” the information at issue must, “if disclosed . . . reveal vulner-<br>abilities in Department of Defense critical infrastructure.” 10 U.S.C. § 130e(f). The closest the<br>Department comes is asserting that the information “individually or in the aggregate, would enable</code> | <code>What type of information must reveal vulnerabilities if disclosed?</code> |
  | <code>they have bid.” Oral Arg. Tr. at 42:18–20. Plaintiffs also assert that, should this Court require the<br>Polaris Solicitations to consider price at the IDIQ level, such an adjustment “adds a solicitation<br>requirement that would necessarily change the overall structure of the evaluation” GSA must<br>perform in awarding the IDIQ contracts. Oral Arg. Tr. at 43:3–5; see supra Discussion Section</code> | <code>Where in the document can further discussion about the assertion be found?</code> |
  | <code>otra parte. Fernández v. San Juan Cement Co., Inc., 118 DPR 713,<br>718-719 (1987). Nuestro más Alto Foro ha dispuesto que, la<br>facultad de imponer honorarios de abogados es la mejor arma que<br><br>22 Id.<br>23 Andamios de PR v. Newport Bonding, 179 DPR 503, 520 (2010); Pérez Rodríguez<br>v. López Rodríguez, supra; SLG González -Figueroa v. Pacheco Romero, supra;</code> | <code>What case is cited with the reference number 118 DPR 713?</code> |
* Loss: [MatryoshkaLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters (see the sketch after this list):
  ```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [
          768,
          512,
          256,
          128,
          64
      ],
      "matryoshka_weights": [
          1,
          1,
          1,
          1,
          1
      ],
      "n_dims_per_step": -1
  }
  ```
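The parameters above correspond to wrapping `MultipleNegativesRankingLoss` (in-batch negatives over anchor/positive pairs) in `MatryoshkaLoss`, which applies that loss at each truncated embedding size. A minimal sketch of the construction; the variable names are illustrative, not taken from the original training script:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("nomic-ai/modernbert-embed-base")

# Ranking loss over (anchor, positive) pairs with in-batch negatives ...
inner_loss = MultipleNegativesRankingLoss(model)

# ... evaluated at every Matryoshka dimension, all weighted equally
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,
)
```

Weighting all dimensions equally means the 64-dimensional prefix contributes as much to the gradient as the full 768-dimensional embedding.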
### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates

#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional

</details>
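Taken together, the non-default hyperparameters map onto the `SentenceTransformerTrainer` API roughly as follows. This is a hedged reconstruction, not the author's actual training script: the output path, the train/eval split, and the `save_strategy` required by `load_best_model_at_end` are assumptions.

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers, SentenceTransformerTrainingArguments

model = SentenceTransformer("nomic-ai/modernbert-embed-base")

# Columns are "positive" (passage) and "anchor" (question); the "train" split name is assumed
dataset = load_dataset("AdamLucek/legal-rag-positives-synthetic", split="train")
dataset = dataset.select_columns(["anchor", "positive"])  # treat the question as the query
split = dataset.train_test_split(test_size=0.1, seed=42)  # illustrative eval split

loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

args = SentenceTransformerTrainingArguments(
    output_dir="modernbert-embed-base-legal-mrl",  # illustrative path
    num_train_epochs=4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    optim="adamw_torch_fused",
    eval_strategy="epoch",
    save_strategy="epoch",  # must match eval_strategy when loading the best model at the end
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate texts acting as false negatives
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=split["train"],
    eval_dataset=split["test"],
    loss=loss,
)
trainer.train()
```

With a per-device batch size of 32 and 16 gradient-accumulation steps, each optimizer update sees roughly 512 pairs (assuming a single device), which is consistent with the ~12 steps per epoch in the training logs below.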
### Training Logs

| Epoch      | Step   | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:----------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
| 0.8791     | 10     | 5.6528        | -                      | -                      | -                      | -                      | -                     |
| 1.0        | 12     | -             | 0.5926                 | 0.5753                 | 0.5457                 | 0.4687                 | 0.3455                |
| 1.7033     | 20     | 2.4543        | -                      | -                      | -                      | -                      | -                     |
| 2.0        | 24     | -             | 0.6195                 | 0.6066                 | 0.5778                 | 0.4998                 | 0.3828                |
| 2.5275     | 30     | 1.7455        | -                      | -                      | -                      | -                      | -                     |
| 3.0        | 36     | -             | 0.6292                 | 0.6135                 | 0.5765                 | 0.5057                 | 0.3928                |
| 3.3516     | 40     | 1.5499        | -                      | -                      | -                      | -                      | -                     |
| **3.7033** | **44** | **-**         | **0.63**               | **0.6138**             | **0.5781**             | **0.5109**             | **0.3956**            |

* The bold row denotes the saved checkpoint.

### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.3.1
- Transformers: 4.48.0
- PyTorch: 2.5.1+cu121
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```