ModernBERT Embed base Legal Matryoshka

This is a sentence-transformers model finetuned from nomic-ai/modernbert-embed-base on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: nomic-ai/modernbert-embed-base
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 768 dimensions
  • Model Size: 149M parameters (F32 safetensors)
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • json
  • Language: en
  • License: apache-2.0

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
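
The pipeline above runs the ModernBERT transformer, mean-pools the token embeddings, and L2-normalizes the result. Below is a minimal sketch of what the Pooling and Normalize() stages compute, using the base transformer directly via the transformers library (illustrative only; SentenceTransformer does all of this for you):

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nomic-ai/modernbert-embed-base")
transformer = AutoModel.from_pretrained("nomic-ai/modernbert-embed-base")

batch = tokenizer(["An example sentence."], padding=True, return_tensors="pt")
with torch.no_grad():
    token_embeddings = transformer(**batch).last_hidden_state  # (batch, seq_len, 768)

# Pooling: mean over non-padding tokens (pooling_mode_mean_tokens=True)
mask = batch["attention_mask"].unsqueeze(-1).float()
pooled = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

# Normalize(): L2-normalize so that dot product equals cosine similarity
embedding = F.normalize(pooled, p=2, dim=1)
print(embedding.shape)  # torch.Size([1, 768])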

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("PhilLel/modernbert-embed-base-legal-matryoshka-2")
# Run inference
sentences = [
    'No. 11-445, ECF No. 52-1; id. Ex. B at 1, No. 11-445, ECF No. 52-1.  On December 8, 2009, the \nplaintiff limited the scope of this request by notifying the CIA that it could “limit [its] search for \nrequests submitted by Michael Ravnitzky to only requests submitted in 2006 and 2009” and that \nit could “limit [its] search to the last four years in which requests were received from [each]',
    'Whose requests did the CIA specifically limit its search to?',
    'How is the document listed in the Vaughn index?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
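
Because the model was trained with MatryoshkaLoss at dimensionalities 768, 512, 256, 128, and 64, its embeddings can be truncated to any of those sizes at a modest cost in retrieval quality (see the Evaluation section below). A minimal sketch using the truncate_dim option of Sentence Transformers:

from sentence_transformers import SentenceTransformer

# Truncate embeddings to one of the Matryoshka training dims (768/512/256/128/64)
model = SentenceTransformer(
    "PhilLel/modernbert-embed-base-legal-matryoshka-2",
    truncate_dim=256,
)
embeddings = model.encode(["How is the document listed in the Vaughn index?"])
print(embeddings.shape)
# (1, 256)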

Evaluation

Metrics

Information Retrieval (dim_768)

Metric Value
cosine_accuracy@1 0.5703
cosine_accuracy@3 0.6244
cosine_accuracy@5 0.6924
cosine_accuracy@10 0.7743
cosine_precision@1 0.5703
cosine_precision@3 0.5456
cosine_precision@5 0.4145
cosine_precision@10 0.2402
cosine_recall@1 0.2056
cosine_recall@3 0.5335
cosine_recall@5 0.6562
cosine_recall@10 0.7582
cosine_ndcg@10 0.6719
cosine_mrr@10 0.6179
cosine_map@100 0.6567

Information Retrieval (dim_512)

Metric Value
cosine_accuracy@1 0.5564
cosine_accuracy@3 0.6198
cosine_accuracy@5 0.6971
cosine_accuracy@10 0.7573
cosine_precision@1 0.5564
cosine_precision@3 0.5358
cosine_precision@5 0.4105
cosine_precision@10 0.2369
cosine_recall@1 0.2007
cosine_recall@3 0.5263
cosine_recall@5 0.6528
cosine_recall@10 0.7465
cosine_ndcg@10 0.6616
cosine_mrr@10 0.6062
cosine_map@100 0.6465

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.541
cosine_accuracy@3 0.5765
cosine_accuracy@5 0.6538
cosine_accuracy@10 0.728
cosine_precision@1 0.541
cosine_precision@3 0.5085
cosine_precision@5 0.3839
cosine_precision@10 0.2263
cosine_recall@1 0.1941
cosine_recall@3 0.4988
cosine_recall@5 0.6115
cosine_recall@10 0.7134
cosine_ndcg@10 0.6311
cosine_mrr@10 0.5806
cosine_map@100 0.6193

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.4699
cosine_accuracy@3 0.5147
cosine_accuracy@5 0.5858
cosine_accuracy@10 0.6646
cosine_precision@1 0.4699
cosine_precision@3 0.4487
cosine_precision@5 0.3419
cosine_precision@10 0.2054
cosine_recall@1 0.1691
cosine_recall@3 0.4417
cosine_recall@5 0.5471
cosine_recall@10 0.6506
cosine_ndcg@10 0.5656
cosine_mrr@10 0.513
cosine_map@100 0.5535

Information Retrieval (dim_64)

Metric Value
cosine_accuracy@1 0.3679
cosine_accuracy@3 0.4019
cosine_accuracy@5 0.4776
cosine_accuracy@10 0.5564
cosine_precision@1 0.3679
cosine_precision@3 0.3498
cosine_precision@5 0.2751
cosine_precision@10 0.1711
cosine_recall@1 0.1311
cosine_recall@3 0.3422
cosine_recall@5 0.4347
cosine_recall@10 0.5415
cosine_ndcg@10 0.4577
cosine_mrr@10 0.4075
cosine_map@100 0.4494
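
The five tables above report the same retrieval metrics at output dimensionalities 768, 512, 256, 128, and 64, and follow the metric layout of Sentence Transformers' InformationRetrievalEvaluator. A minimal sketch of running such an evaluation; the queries, corpus, and relevance judgments below are placeholders, not the actual test split:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("PhilLel/modernbert-embed-base-legal-matryoshka-2")

# Placeholder evaluation data: id -> text, and query id -> relevant corpus ids
queries = {"q1": "Whose requests did the CIA specifically limit its search to?"}
corpus = {
    "d1": "On December 8, 2009, the plaintiff limited the scope of this request ...",
    "d2": "After the bench conference concluded, the following exchange occurred ...",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="dim_768",
)
results = evaluator(model)  # accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100
print(results)

To approximate the lower-dimensional tables, reload the model with truncate_dim set to 512, 256, 128, or 64 and rerun the evaluator.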

Training Details

Training Dataset

json

  • Dataset: json
  • Size: 5,822 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    • positive: string; min: 28 tokens, mean: 96.98 tokens, max: 157 tokens
    • anchor: string; min: 8 tokens, mean: 16.79 tokens, max: 41 tokens
  • Samples (positive / anchor pairs):
    • positive: After the bench conference concluded, the following exchange occurred between the prosecutor and Mr. Zimmerman:
      [PROSECUTOR:] Did you watch this video in preparation?
      [MR. ZIMMERMAN:] Yes, I did.
      [PROSECUTOR:] Okay. And after seeing that video[,] was that a true and accurate depiction of the events that occurred that day?
      [MR. ZIMMERMAN:] Yes.
      anchor: What was Mr. Zimmerman's response when asked if he watched the video in preparation?
    • positive: those guidelines still left a significant amount of ambiguity about “precisely what records [were] being requested.” Id. (internal quotation marks omitted). Notably, although the plaintiff limited the date range and number of reports requested, the plaintiff’s request would still place an unreasonable search burden for two primary reasons. First, the plaintiff’s guideline asking for
      anchor: What aspect of the plaintiff's request is mentioned as limited?
    • positive: motion without prejudice and permit him to do the same. See Prop. of the People, Inc., 330 F. Supp. 3d at 390 (denying the parties’ motions without prejudice because the agency failed to submit sufficient information justifying its FOIA withholdings and permitting both parties to file renewed motions). Thus, it is hereby ORDERED that Defendant’s Motion for Summary Judgment, ECF
      anchor: What were the parties allowed to do after their motions were denied?
  • Loss: MatryoshkaLoss with these parameters (a construction sketch follows the parameter block):
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
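
A minimal sketch of constructing this loss with the Sentence Transformers API (the base model is loaded here only for illustration):

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("nomic-ai/modernbert-embed-base")

# MatryoshkaLoss applies the inner loss to the embeddings truncated to each
# listed dimensionality and sums the per-dimension losses with the given weights.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)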
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 4
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
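
A sketch of expressing these non-default hyperparameters as training arguments. output_dir is a placeholder, and save_strategy="epoch" is an added assumption so that load_best_model_at_end has matching save and eval strategies:

from sentence_transformers.training_args import (
    BatchSamplers,
    SentenceTransformerTrainingArguments,
)

args = SentenceTransformerTrainingArguments(
    output_dir="modernbert-embed-base-legal-matryoshka-2",  # placeholder
    eval_strategy="epoch",
    save_strategy="epoch",  # assumption: must match eval_strategy for load_best_model_at_end
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)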

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • tp_size: 0
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss dim_768_cosine_ndcg@10 dim_512_cosine_ndcg@10 dim_256_cosine_ndcg@10 dim_128_cosine_ndcg@10 dim_64_cosine_ndcg@10
1.0 6 - 0.5702 0.5637 0.5165 0.4642 0.3672
1.7033 10 107.719 - - - - -
2.0 12 - 0.6308 0.6204 0.5816 0.5030 0.3945
3.0 18 - 0.6403 0.6286 0.5892 0.5124 0.3973
3.3516 20 58.188 0.6406 0.6285 0.5906 0.5135 0.3979
1.0 6 - 0.6590 0.6518 0.6151 0.5451 0.4307
1.7033 10 49.076 - - - - -
2.0 12 - 0.6696 0.6602 0.6247 0.5612 0.4497
3.0 18 - 0.6719 0.6616 0.6311 0.5656 0.4577
3.3516 20 36.707 0.6719 0.6616 0.6311 0.5656 0.4577
  • The final row (epoch 3.3516, step 20) denotes the saved checkpoint; its ndcg@10 values match the Evaluation section above.

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 4.1.0
  • Transformers: 4.51.3
  • PyTorch: 2.7.0+cu126
  • Accelerate: 1.6.0
  • Datasets: 3.5.1
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}