---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sparse-encoder
- sparse
- splade
- generated_from_trainer
- dataset_size:10000
- loss:SpladeLoss
- loss:SparseMultipleNegativesRankingLoss
- loss:FlopsLoss
base_model: naver/splade-cocondenser-ensembledistil
widget:
- text: Two kids at a ballgame wash their hands.
- text: Two dogs near a lake, while a person rides by on a horse.
- text: >-
    This mother and her daughter and granddaughter are having car trouble, and
    the poor little girl looks hot out in the heat.
- text: A young man competes in the Olympics in the pole vaulting competition.
- text: A man is playing with the brass pots
datasets:
- sentence-transformers/all-nli
pipeline_tag: feature-extraction
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
- active_dims
- sparsity_ratio
co2_eq_emissions:
  emissions: 2.9668555526185707
  energy_consumed: 0.007632725204960537
  source: codecarbon
  training_type: fine-tuning
  on_cloud: false
  cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
  ram_total_size: 31.777088165283203
  hours_used: 0.033
  hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: >-
    splade-cocondenser-ensembledistil trained on Natural Language Inference
    (NLI)
  results:
  - task:
      type: semantic-similarity
      name: Semantic Similarity
    dataset:
      name: sts dev
      type: sts-dev
    metrics:
    - type: pearson_cosine
      value: 0.8541311579868741
      name: Pearson Cosine
    - type: spearman_cosine
      value: 0.8470008029984434
      name: Spearman Cosine
    - type: active_dims
      value: 99.30233383178711
      name: Active Dims
    - type: sparsity_ratio
      value: 0.9967465325394211
      name: Sparsity Ratio
  - task:
      type: semantic-similarity
      name: Semantic Similarity
    dataset:
      name: sts test
      type: sts-test
    metrics:
    - type: pearson_cosine
      value: 0.8223074543214202
      name: Pearson Cosine
    - type: spearman_cosine
      value: 0.8065254878130631
      name: Spearman Cosine
    - type: active_dims
      value: 95.75453186035156
      name: Active Dims
    - type: sparsity_ratio
      value: 0.9968627700720676
      name: Sparsity Ratio
---

# splade-cocondenser-ensembledistil trained on Natural Language Inference (NLI)
This is a SPLADE Sparse Encoder model finetuned from naver/splade-cocondenser-ensembledistil on the all-nli dataset using the sentence-transformers library. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.
## Model Details

### Model Description

- Model Type: SPLADE Sparse Encoder
- Base model: naver/splade-cocondenser-ensembledistil
- Maximum Sequence Length: 256 tokens
- Output Dimensionality: 30522 dimensions
- Similarity Function: Dot Product
- Training Dataset: all-nli
- Language: en
- License: apache-2.0
### Model Sources

- Documentation: Sentence Transformers Documentation
- Documentation: Sparse Encoder Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sparse Encoders on Hugging Face
### Full Model Architecture

```
SparseEncoder(
  (0): MLMTransformer({'max_seq_length': 256, 'do_lower_case': False}) with MLMTransformer model: BertForMaskedLM
  (1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
)
```
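The `SpladePooling` module turns the per-token MLM logits into a single vocabulary-sized vector: each logit is passed through `log(1 + ReLU(x))` and, with the `'max'` pooling strategy, the strongest activation per vocabulary term is kept across the sequence. The following is a minimal sketch of that pooling step under those stated settings, not the library's actual implementation; the `splade_pooling` helper and the toy shapes are illustrative only.

```python
import torch

def splade_pooling(mlm_logits: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Illustrative SPLADE max pooling over MLM logits.

    mlm_logits:     [batch, seq_len, vocab_size] logits from BertForMaskedLM
    attention_mask: [batch, seq_len], 1 for real tokens, 0 for padding
    Returns a [batch, vocab_size] sparse-friendly bag-of-terms vector.
    """
    # log(1 + ReLU(x)) zeroes out negative logits and saturates large ones,
    # which is what makes the resulting vectors sparse.
    activated = torch.log1p(torch.relu(mlm_logits))
    # Mask padding positions so they cannot contribute to the max.
    activated = activated * attention_mask.unsqueeze(-1)
    # 'max' pooling strategy: strongest activation per vocabulary term.
    return activated.max(dim=1).values

# Toy shapes matching this model's 30522-term vocabulary.
logits = torch.randn(2, 8, 30522)
mask = torch.ones(2, 8, dtype=torch.long)
print(splade_pooling(logits, mask).shape)  # torch.Size([2, 30522])
```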
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SparseEncoder

# Download from the 🤗 Hub
model = SparseEncoder("tomaarsen/splade-cocondenser-ensembledistil-nli")
# Run inference
sentences = [
    'A man is sitting in on the side of the street with brass pots.',
    'A man is playing with the brass pots',
    'A group of adults are swimming at the beach.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 30522]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[16.8617, 12.9505,  0.2749],
#         [12.9505, 20.8479,  0.2440],
#         [ 0.2749,  0.2440, 18.7043]])
```
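Because each of the 30522 dimensions corresponds to a vocabulary token, the embeddings are interpretable. As a sketch, recent sentence-transformers releases expose a `decode` method on `SparseEncoder` that maps active dimensions back to tokens; availability and exact behavior may vary by version.

```python
# Continuing from the snippet above: map the strongest active dimensions
# of the first embedding back to vocabulary tokens with their weights.
decoded = model.decode(embeddings[0], top_k=10)
for token, weight in decoded:
    print(f"{token:>12}  {weight:.2f}")
```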
## Evaluation

### Metrics

#### Semantic Similarity

- Datasets: `sts-dev` and `sts-test`
- Evaluated with `SparseEmbeddingSimilarityEvaluator`
| Metric          | sts-dev | sts-test |
|:----------------|:--------|:---------|
| pearson_cosine  | 0.8541  | 0.8223   |
| spearman_cosine | 0.8470  | 0.8065   |
| active_dims     | 99.3023 | 95.7545  |
| sparsity_ratio  | 0.9967  | 0.9969   |
## Training Details

### Training Dataset

#### all-nli

- Dataset: all-nli at d482672
- Size: 10,000 training samples
- Columns: `sentence1`, `sentence2`, and `score`
- Approximate statistics based on the first 1000 samples:

|         | sentence1 | sentence2 | score |
|:--------|:----------|:----------|:------|
| type    | string    | string    | float |
| details | min: 6 tokens, mean: 17.38 tokens, max: 52 tokens | min: 4 tokens, mean: 10.7 tokens, max: 31 tokens | min: 0.0, mean: 0.5, max: 1.0 |
- Samples:

| sentence1                                              | sentence2                                         | score |
|:-------------------------------------------------------|:--------------------------------------------------|:------|
| A person on a horse jumps over a broken down airplane. | A person is training his horse for a competition. | 0.5   |
| A person on a horse jumps over a broken down airplane. | A person is at a diner, ordering an omelette.     | 0.0   |
| A person on a horse jumps over a broken down airplane. | A person is outdoors, on a horse.                 | 1.0   |
- Loss: `SpladeLoss` with these parameters:

```json
{
    "loss": "SparseMultipleNegativesRankingLoss(scale=1, similarity_fct='dot_score')",
    "lambda_corpus": 0.003
}
```
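For context, `SpladeLoss` wraps the ranking loss with a FLOPS sparsity regularizer (see the FlopsLoss citation below) whose weight is the `lambda_corpus` value above. The following is a minimal construction sketch; the `pair-score` subset name is an assumption about how the (sentence1, sentence2, score) columns are obtained, and argument names can differ between sentence-transformers versions.

```python
from datasets import load_dataset
from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.losses import (
    SparseMultipleNegativesRankingLoss,
    SpladeLoss,
)

model = SparseEncoder("naver/splade-cocondenser-ensembledistil")

# (sentence1, sentence2, score) rows as shown in the samples above; the
# "pair-score" subset of all-nli carries these columns (assumption).
train_dataset = load_dataset("sentence-transformers/all-nli", "pair-score", split="train[:10000]")

# The ranking loss treats other in-batch sentence2 entries as negatives;
# the FLOPS term penalizes dense activations, pushing embeddings toward sparsity.
loss = SpladeLoss(
    model=model,
    loss=SparseMultipleNegativesRankingLoss(model=model, scale=1.0),
    lambda_corpus=0.003,  # weight of the FLOPS regularizer, as listed above
)
```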
### Evaluation Dataset

#### all-nli

- Dataset: all-nli at d482672
- Size: 1,000 evaluation samples
- Columns: `sentence1`, `sentence2`, and `score`
- Approximate statistics based on the first 1000 samples:

|         | sentence1 | sentence2 | score |
|:--------|:----------|:----------|:------|
| type    | string    | string    | float |
| details | min: 6 tokens, mean: 18.44 tokens, max: 57 tokens | min: 5 tokens, mean: 10.57 tokens, max: 25 tokens | min: 0.0, mean: 0.5, max: 1.0 |
- Samples:

| sentence1                                             | sentence2                                                                              | score |
|:------------------------------------------------------|:---------------------------------------------------------------------------------------|:------|
| Two women are embracing while holding to go packages. | The sisters are hugging goodbye while holding to go packages after just eating lunch.  | 0.5   |
| Two women are embracing while holding to go packages. | Two woman are holding packages.                                                         | 1.0   |
| Two women are embracing while holding to go packages. | The men are fighting outside a deli.                                                    | 0.0   |
- Loss: `SpladeLoss` with these parameters:

```json
{
    "loss": "SparseMultipleNegativesRankingLoss(scale=1, similarity_fct='dot_score')",
    "lambda_corpus": 0.003
}
```
### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 4e-06
- `num_train_epochs`: 1
- `bf16`: True
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
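Put together, a hedged training sketch wiring these non-default values into the `SparseEncoderTrainer` API, continuing from the loss sketch above; `output_dir`, the eval split slice, and the step cadence are assumptions (the cadence matches the evaluation interval visible in the training logs below).

```python
from datasets import load_dataset
from sentence_transformers.sparse_encoder import (
    SparseEncoderTrainer,
    SparseEncoderTrainingArguments,
)
from sentence_transformers.training_args import BatchSamplers

# 1,000 evaluation samples as described above (slice is illustrative).
eval_dataset = load_dataset("sentence-transformers/all-nli", "pair-score", split="dev[:1000]")

args = SparseEncoderTrainingArguments(
    output_dir="models/splade-cocondenser-ensembledistil-nli",  # illustrative
    num_train_epochs=1,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=4e-6,
    bf16=True,
    eval_strategy="steps",
    eval_steps=120,  # assumption: matches the eval cadence in the logs below
    save_steps=120,
    load_best_model_at_end=True,
    # no_duplicates keeps repeated sentences out of a batch, which matters
    # for the in-batch negatives of the ranking loss.
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SparseEncoderTrainer(
    model=model,  # from the loss sketch above
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()
```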
#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 4e-06
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}

</details>
### Training Logs

| Epoch | Step | Training Loss | Validation Loss | sts-dev_spearman_cosine | sts-test_spearman_cosine |
|:-----:|:----:|:-------------:|:---------------:|:-----------------------:|:------------------------:|
| -1 | -1 | - | - | 0.8366 | - |
| 0.032 | 20 | 0.8107 | - | - | - |
| 0.064 | 40 | 0.7854 | - | - | - |
| 0.096 | 60 | 0.7015 | - | - | - |
| 0.128 | 80 | 0.7161 | - | - | - |
| 0.16 | 100 | 0.724 | - | - | - |
| 0.192 | 120 | 0.6883 | 0.7255 | 0.8454 | - |
| 0.224 | 140 | 0.6661 | - | - | - |
| 0.256 | 160 | 0.6786 | - | - | - |
| 0.288 | 180 | 0.679 | - | - | - |
| 0.32 | 200 | 0.8013 | - | - | - |
| 0.352 | 220 | 0.6781 | - | - | - |
| 0.384 | 240 | 0.667 | 0.6779 | 0.8465 | - |
| 0.416 | 260 | 0.6691 | - | - | - |
| 0.448 | 280 | 0.7376 | - | - | - |
| 0.48 | 300 | 0.5601 | - | - | - |
| 0.512 | 320 | 0.6425 | - | - | - |
| 0.544 | 340 | 0.7406 | - | - | - |
| 0.576 | 360 | 0.6033 | 0.6623 | 0.8469 | - |
| 0.608 | 380 | 0.8166 | - | - | - |
| 0.64 | 400 | 0.5303 | - | - | - |
| 0.672 | 420 | 0.614 | - | - | - |
| 0.704 | 440 | 0.6253 | - | - | - |
| 0.736 | 460 | 0.5467 | - | - | - |
| 0.768 | 480 | 0.6804 | 0.6531 | 0.8470 | - |
| 0.8 | 500 | 0.6765 | - | - | - |
| 0.832 | 520 | 0.6522 | - | - | - |
| 0.864 | 540 | 0.5845 | - | - | - |
| 0.896 | 560 | 0.6786 | - | - | - |
| 0.928 | 580 | 0.5232 | - | - | - |
| **0.96** | **600** | **0.6077** | **0.6516** | **0.8470** | **-** |
| 0.992 | 620 | 0.619 | - | - | - |
| -1 | -1 | - | - | - | 0.8065 |
- The bold row denotes the saved checkpoint.
## Environmental Impact

Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).

- Energy Consumed: 0.008 kWh
- Carbon Emitted: 0.003 kg of CO2
- Hours Used: 0.033 hours

### Training Hardware

- On Cloud: No
- GPU Model: 1 x NVIDIA GeForce RTX 3090
- CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K
- RAM Size: 31.78 GB
## Framework Versions

- Python: 3.11.6
- Sentence Transformers: 4.2.0.dev0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.1
- Datasets: 2.21.0
- Tokenizers: 0.21.1
## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
#### SpladeLoss

```bibtex
@misc{formal2022distillationhardnegativesampling,
    title={From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective},
    author={Thibault Formal and Carlos Lassance and Benjamin Piwowarski and Stéphane Clinchant},
    year={2022},
    eprint={2205.04733},
    archivePrefix={arXiv},
    primaryClass={cs.IR},
    url={https://arxiv.org/abs/2205.04733},
}
```
#### SparseMultipleNegativesRankingLoss

```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
#### FlopsLoss

```bibtex
@article{paria2020minimizing,
    title={Minimizing flops to learn efficient sparse representations},
    author={Paria, Biswajit and Yeh, Chih-Kuan and Yen, Ian EH and Xu, Ning and Ravikumar, Pradeep and P{\'o}czos, Barnab{\'a}s},
    journal={arXiv preprint arXiv:2004.05665},
    year={2020}
}
```