ModernBERT Embed base Legal Matryoshka

This is a sentence-transformers model finetuned from nomic-ai/modernbert-embed-base on a JSON dataset of legal anchor-positive pairs. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: nomic-ai/modernbert-embed-base
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • json
  • Language: en
  • License: apache-2.0

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
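
The three modules above can be read as a pipeline: ModernBERT produces per-token embeddings, the Pooling module mean-averages them over non-padding tokens, and Normalize L2-normalizes the result so cosine similarity reduces to a dot product. A minimal numpy sketch of the pooling and normalization steps (the random array is a stand-in for ModernBERT token output; the shapes and names are illustrative, not the library's internals):

```python
import numpy as np

# Stand-in for ModernBERT token embeddings: (num_tokens, 768)
rng = np.random.default_rng(0)
token_embeddings = rng.normal(size=(12, 768))
attention_mask = np.ones(12)  # 1 = real token, 0 = padding

# Pooling module: mean over non-padding tokens (pooling_mode_mean_tokens)
summed = (token_embeddings * attention_mask[:, None]).sum(axis=0)
pooled = summed / attention_mask.sum()

# Normalize module: L2-normalize so cosine similarity == dot product
sentence_embedding = pooled / np.linalg.norm(pooled)

print(sentence_embedding.shape)            # (768,)
print(np.linalg.norm(sentence_embedding))  # ~1.0
```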

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("TharushiDinushika/modernbert-embed-base-legal-matryoshka-2")
# Run inference
sentences = [
    'The CIA appears to recognize the breadth of its proposed interpretation in this regard, \ncontending in multiple places that “it is not clear that there is any practical difference between \nthe organization and functions of CIA personnel and those of the Agency” since “the CIA is \ncomposed of and acts entirely through its employees.”  See Def.’s First 443 Reply at 9; see also',
    'What does the CIA contend about the difference between the organization and functions of its personnel and those of the Agency?',
    'What does 5 C.F.R. § 340.403(a) require regarding work schedule changes?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
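
Because the model was trained with MatryoshkaLoss, the 768-dimensional embeddings can be truncated to 512, 256, 128, or 64 dimensions and renormalized, trading some retrieval quality (see Evaluation) for smaller indexes. A numpy sketch on stand-in embeddings (random unit vectors in place of `model.encode` output):

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for model.encode(...): 3 unit-norm 768-dim embeddings
emb = rng.normal(size=(3, 768))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

def truncate(embeddings, dim):
    """Keep the first `dim` Matryoshka dimensions and renormalize."""
    cut = embeddings[:, :dim]
    return cut / np.linalg.norm(cut, axis=1, keepdims=True)

for dim in (768, 512, 256, 128, 64):
    small = truncate(emb, dim)
    similarities = small @ small.T  # cosine similarity matrix, shape (3, 3)
    print(dim, similarities.shape)
```

Recent sentence-transformers releases can also do this at load time, e.g. `SentenceTransformer(model_id, truncate_dim=256)`, so that `encode()` returns truncated, renormalized embeddings directly; treat the exact keyword as version-dependent.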

Evaluation

Metrics

Information Retrieval (dim_768)

Metric Value
cosine_accuracy@1 0.5688
cosine_accuracy@3 0.609
cosine_accuracy@5 0.694
cosine_accuracy@10 0.7635
cosine_precision@1 0.5688
cosine_precision@3 0.5353
cosine_precision@5 0.4117
cosine_precision@10 0.2423
cosine_recall@1 0.2027
cosine_recall@3 0.5209
cosine_recall@5 0.6452
cosine_recall@10 0.7558
cosine_ndcg@10 0.6689
cosine_mrr@10 0.6136
cosine_map@100 0.6535

Information Retrieval (dim_512)

Metric Value
cosine_accuracy@1 0.5611
cosine_accuracy@3 0.6074
cosine_accuracy@5 0.6847
cosine_accuracy@10 0.7512
cosine_precision@1 0.5611
cosine_precision@3 0.5307
cosine_precision@5 0.4087
cosine_precision@10 0.238
cosine_recall@1 0.199
cosine_recall@3 0.514
cosine_recall@5 0.6403
cosine_recall@10 0.7421
cosine_ndcg@10 0.6583
cosine_mrr@10 0.6053
cosine_map@100 0.6441

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.4977
cosine_accuracy@3 0.5641
cosine_accuracy@5 0.6507
cosine_accuracy@10 0.711
cosine_precision@1 0.4977
cosine_precision@3 0.4822
cosine_precision@5 0.3839
cosine_precision@10 0.2249
cosine_recall@1 0.1753
cosine_recall@3 0.4656
cosine_recall@5 0.5996
cosine_recall@10 0.6996
cosine_ndcg@10 0.6086
cosine_mrr@10 0.5505
cosine_map@100 0.5948

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.4436
cosine_accuracy@3 0.4853
cosine_accuracy@5 0.5842
cosine_accuracy@10 0.6677
cosine_precision@1 0.4436
cosine_precision@3 0.4209
cosine_precision@5 0.3366
cosine_precision@10 0.2114
cosine_recall@1 0.1574
cosine_recall@3 0.4083
cosine_recall@5 0.5242
cosine_recall@10 0.6562
cosine_ndcg@10 0.5561
cosine_mrr@10 0.4933
cosine_map@100 0.5379

Information Retrieval (dim_64)

Metric Value
cosine_accuracy@1 0.3338
cosine_accuracy@3 0.3756
cosine_accuracy@5 0.4699
cosine_accuracy@10 0.5595
cosine_precision@1 0.3338
cosine_precision@3 0.3205
cosine_precision@5 0.2634
cosine_precision@10 0.1764
cosine_recall@1 0.1209
cosine_recall@3 0.3135
cosine_recall@5 0.414
cosine_recall@10 0.5399
cosine_ndcg@10 0.4453
cosine_mrr@10 0.3835
cosine_map@100 0.4345
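
The tables above use standard retrieval definitions: accuracy@k asks whether any relevant document appears in the top k results, precision@k divides the number of relevant hits in the top k by k, and recall@k divides it by the total number of relevant documents for the query. A small self-contained sketch of these three metrics (toy document IDs, not drawn from the evaluation set):

```python
def metrics_at_k(ranked_ids, relevant_ids, k):
    """accuracy@k, precision@k, recall@k for a single query."""
    top_k = ranked_ids[:k]
    hits = sum(1 for doc in top_k if doc in relevant_ids)
    return {
        "accuracy": 1.0 if hits > 0 else 0.0,
        "precision": hits / k,
        "recall": hits / len(relevant_ids),
    }

# Toy query: 3 relevant documents, ranked list of 5 retrieved documents
ranked = ["d2", "d7", "d1", "d9", "d3"]
relevant = {"d1", "d3", "d5"}
print(metrics_at_k(ranked, relevant, 5))
# accuracy 1.0, precision 2/5 = 0.4, recall 2/3
```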

Training Details

Training Dataset

json

  • Dataset: json
  • Size: 5,822 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min 7, mean 16.7, max 46 tokens
    • positive: string; min 26, mean 97.29, max 156 tokens
  • Samples:
    • anchor: What is a typical and appropriate method for deciding FOIA cases?
      positive: some other failure to abide by the terms of the FOIA, and not merely isolated mistakes by agency officials.” Payne, 837 F.2d at 491. B. Summary Judgment “‘FOIA cases typically and appropriately are decided on motions for summary judgment.’” Georgacarakos v. FBI, 908 F. Supp. 2d 176, 180 (D.D.C. 2012) (quoting Defenders
    • anchor: Who had the burden to authenticate the video?
      positive: fairly and accurately depicted the shooting. And, although the burden was on the State to authenticate the video, it is worth observing that, while Mr. Mooney’s counsel argued in the circuit court that “there’s no way to know if that video’s been altered[,]” Mr. Mooney did not allege that the video was altered or tampered with.
    • anchor: ¿Qué no logró probar Salgueiro? [What did Salgueiro fail to prove?]
      positive: Salgueiro no logró probar su causa de acción. Si bien es cierto que, la parte apelante no logró persuadirnos de que el trabajo audiovisual realizado por el señor Friger Salgueiro –incluyendo aquel en que aparecía su propia imagen– fuera hecho por encargo, la parte apelada tampoco logró establecer mediante prueba a esos efectos,
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
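
MatryoshkaLoss wraps the inner loss and applies it to each truncated prefix of the embedding (768, 512, 256, 128, and 64 dimensions here), summing the equally weighted results; MultipleNegativesRankingLoss itself is in-batch softmax cross-entropy over scaled cosine similarities, where each anchor's positive is the target and the other positives in the batch serve as negatives. A numpy sketch of this computation on random stand-in embeddings (the scale of 20 matches the sentence-transformers default; everything else is illustrative):

```python
import numpy as np

def mnr_loss(anchors, positives, scale=20.0):
    """MultipleNegativesRankingLoss: softmax cross-entropy over
    in-batch cosine similarities; positives[i] is the target for anchors[i]."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    scores = scale * (a @ p.T)                   # (batch, batch)
    scores -= scores.max(axis=1, keepdims=True)  # stabilize the softmax
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -log_probs.diagonal().mean()

def matryoshka_loss(anchors, positives, dims=(768, 512, 256, 128, 64)):
    # Sum the ranking loss over each truncated prefix of the embedding
    return sum(mnr_loss(anchors[:, :d], positives[:, :d]) for d in dims)

rng = np.random.default_rng(2)
anchors = rng.normal(size=(8, 768))
positives = anchors + 0.1 * rng.normal(size=(8, 768))  # near-duplicates
print(matryoshka_loss(anchors, positives))
```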

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 2
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
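
Several of these values are linked: assuming a single training device, the effective batch size is per_device_train_batch_size × gradient_accumulation_steps, which determines the optimizer steps per epoch for the 5,822-sample dataset and, via warmup_ratio, the warmup length. A quick arithmetic check (step counts are approximate; the exact schedule also depends on the batch sampler):

```python
import math

per_device_train_batch_size = 32
gradient_accumulation_steps = 16
num_train_epochs = 2
train_samples = 5822
warmup_ratio = 0.1

effective_batch = per_device_train_batch_size * gradient_accumulation_steps
steps_per_epoch = math.ceil(train_samples / effective_batch)
total_steps = steps_per_epoch * num_train_epochs
warmup_steps = round(total_steps * warmup_ratio)

print(effective_batch, steps_per_epoch, total_steps, warmup_steps)
# 512 12 24 2
```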

All Hyperparameters

Click to expand
  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 2
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • tp_size: 0
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss dim_768_cosine_ndcg@10 dim_512_cosine_ndcg@10 dim_256_cosine_ndcg@10 dim_128_cosine_ndcg@10 dim_64_cosine_ndcg@10
0.8791 10 49.1647 - - - - -
1.0 12 - 0.6588 0.6507 0.6034 0.5468 0.4348
1.7033 20 30.5671 - - - - -
1.8791 22 - 0.6689 0.6583 0.6086 0.5561 0.4453
  • The epoch 1.8791 (step 22) row is the saved checkpoint, selected via load_best_model_at_end; its values match the metrics reported under Evaluation.

Framework Versions

  • Python: 3.11.12
  • Sentence Transformers: 4.1.0
  • Transformers: 4.51.3
  • PyTorch: 2.7.0+cu126
  • Accelerate: 1.6.0
  • Datasets: 3.6.0
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
Model size: 149M parameters (F32, Safetensors)