SPLADE Sparse Encoder
This is a SPLADE Sparse Encoder model finetuned from almanach/moderncamembert-cv2-base on the french_sts dataset using the sentence-transformers library. It maps sentences & paragraphs to a 32768-dimensional sparse vector space and can be used for semantic search and sparse retrieval.
Model Details
Model Description
- Model Type: SPLADE Sparse Encoder
- Base model: almanach/moderncamembert-cv2-base
- Maximum Sequence Length: 8192 tokens
- Output Dimensionality: 32768 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset: french_sts
- Language: fr
Model Sources
- Documentation: Sentence Transformers Documentation
- Documentation: Sparse Encoder Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sparse Encoders on Hugging Face
Full Model Architecture
SparseEncoder(
  (0): MLMTransformer({'max_seq_length': 8192, 'do_lower_case': False, 'architecture': 'ModernBertForMaskedLM'})
  (1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 32768})
)
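In other words, the MLMTransformer produces masked-language-model logits over the 32768-token vocabulary for every input position, and SpladePooling collapses them into a single non-negative vector per text. The snippet below is a conceptual sketch of that pooling step only (illustrative, not the library's exact implementation; SPLADE typically applies log(1 + ReLU(x)) before max-pooling):
import torch

# Conceptual sketch of SPLADE pooling: turn per-token MLM logits of shape
# (seq_len, vocab_size) into one sparse vector of shape (vocab_size,).
seq_len, vocab_size = 12, 32768
mlm_logits = torch.randn(seq_len, vocab_size)    # placeholder logits for illustration

activated = torch.log1p(torch.relu(mlm_logits))  # log(1 + ReLU(x)) keeps non-negative evidence
sparse_vector = activated.amax(dim=0)            # 'max' pooling over the sequence
print(sparse_vector.shape)                       # torch.Size([32768])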
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SparseEncoder
# Download from the 🤗 Hub
model = SparseEncoder("bourdoiscatie/SPLADE_moderncamembert_STS")
# Run inference
sentences = [
"Oui, je peux vous dire d'après mon expérience personnelle qu'ils ont certainement sifflé.",
"Il est vrai que les bombes de la Seconde Guerre mondiale faisaient un bruit de sifflet lorsqu'elles tombaient.",
"J'envisage de dépenser les 48 dollars par mois pour le système GTD (Getting things done) annoncé par David Allen.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 32768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.2788, 0.1010],
# [0.2788, 1.0000, 0.0000],
# [0.1010, 0.0000, 1.0000]])
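Each dimension of these embeddings corresponds to one vocabulary token, so the non-zero entries can be read back as weighted terms. A minimal sketch for inspecting the first embedding from the snippet above (it assumes `embeddings` is a torch tensor, possibly sparse, and reuses the model's tokenizer):
import torch

# List the active vocabulary tokens of the first embedding, strongest first.
dense = embeddings.to_dense() if embeddings.is_sparse else embeddings
emb = dense[0]
token_ids = torch.nonzero(emb, as_tuple=False).squeeze(-1)
tokens = model.tokenizer.convert_ids_to_tokens(token_ids.tolist())
weights = emb[token_ids].tolist()
for token, weight in sorted(zip(tokens, weights), key=lambda pair: -pair[1]):
    print(f"{token}\t{weight:.3f}")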
Evaluation
Metrics
Semantic Similarity
- Datasets: sts-dev and sts-test
- Evaluated with SparseEmbeddingSimilarityEvaluator
Metric | sts-dev | sts-test |
---|---|---|
pearson_cosine | 0.6391 | 0.6476 |
spearman_cosine | 0.6356 | 0.5929 |
active_dims | 13.7024 | 17.2084 |
sparsity_ratio | 0.9996 | 0.9995 |
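Here active_dims is the average number of non-zero dimensions per embedding and sparsity_ratio the fraction of zero dimensions (1 - active_dims / 32768). A minimal sketch for recomputing both from the embeddings of the usage example above:
import torch

# Recompute the sparsity statistics from a batch of embeddings, e.g. the
# (3, 32768) tensor produced in the usage example.
dense = embeddings.to_dense() if embeddings.is_sparse else embeddings
active_dims = (dense != 0).sum(dim=1).float().mean().item()   # avg. non-zero dims per embedding
sparsity_ratio = 1.0 - active_dims / dense.shape[1]           # fraction of zero dims
print(f"active_dims={active_dims:.2f}, sparsity_ratio={sparsity_ratio:.4f}")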
Training Details
Training Dataset
french_sts
- Dataset: french_sts at 47128cc
- Size: 12,227 training samples
- Columns: sentence1, sentence2, and score
- Approximate statistics based on the first 1000 samples:
 | sentence1 | sentence2 | score |
---|---|---|---|
type | string | string | float |
details | min: 6 tokens, mean: 11.57 tokens, max: 30 tokens | min: 6 tokens, mean: 11.62 tokens, max: 35 tokens | min: 0.0, mean: 0.44, max: 1.0 |
- Samples:
sentence1 | sentence2 | score |
---|---|---|
Un avion est en train de décoller. | Un avion est en train de décoller. | 1.0 |
Un homme est en train de fumer. | Un homme fait du patinage. | 0.1 |
Une personne jette un chat au plafond. | Une personne jette un chat au plafond. | 1.0 |
- Loss: SpladeLoss with these parameters:
{
    "loss": "SparseCosineSimilarityLoss(loss_fct='torch.nn.modules.loss.MSELoss')",
    "document_regularizer_weight": 0.003
}
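For orientation, here is a minimal sketch of how a loss with these parameters might be constructed, assuming the sentence-transformers 5.0 sparse-encoder API (the module path and the direct loading of the base checkpoint are assumptions; consult the library documentation for the exact signatures):
import torch
from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.losses import SpladeLoss, SparseCosineSimilarityLoss

# Assumed setup: start from the base checkpoint named in this card.
model = SparseEncoder("almanach/moderncamembert-cv2-base")

# SpladeLoss wraps a similarity loss and adds a sparsity regulariser on the
# produced vectors, weighted here as reported above (0.003).
loss = SpladeLoss(
    model=model,
    loss=SparseCosineSimilarityLoss(model, loss_fct=torch.nn.MSELoss()),
    document_regularizer_weight=0.003,
)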
Evaluation Dataset
french_sts
- Dataset: french_sts at 47128cc
- Size: 3,526 evaluation samples
- Columns: sentence1, sentence2, and score
- Approximate statistics based on the first 1000 samples:
 | sentence1 | sentence2 | score |
---|---|---|---|
type | string | string | float |
details | min: 6 tokens, mean: 18.54 tokens, max: 48 tokens | min: 6 tokens, mean: 18.5 tokens, max: 52 tokens | min: 0.0, mean: 0.43, max: 1.0 |
- Samples:
sentence1 | sentence2 | score |
---|---|---|
Un homme avec un casque de sécurité est en train de danser. | Un homme portant un casque de sécurité est en train de danser. | 1.0 |
Un jeune enfant monte à cheval. | Un enfant monte à cheval. | 0.95 |
Un homme donne une souris à un serpent. | L'homme donne une souris au serpent. | 1.0 |
- Loss: SpladeLoss with these parameters:
{
    "loss": "SparseCosineSimilarityLoss(loss_fct='torch.nn.modules.loss.MSELoss')",
    "document_regularizer_weight": 0.003
}
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: epoch
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- bf16: True
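The sketch below shows how a training run with these non-default settings could be assembled. It is a hedged reconstruction: the SparseEncoderTrainer and SparseEncoderTrainingArguments class names follow the sentence-transformers 5.0 training API, and the dataset identifier and split names are placeholders, since this card only refers to the dataset as french_sts.
from datasets import load_dataset
from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder import SparseEncoderTrainer, SparseEncoderTrainingArguments
from sentence_transformers.sparse_encoder.losses import SpladeLoss, SparseCosineSimilarityLoss

model = SparseEncoder("almanach/moderncamembert-cv2-base")

# Placeholder dataset identifier and splits; the card only names the dataset "french_sts".
dataset = load_dataset("french_sts")
train_dataset, eval_dataset = dataset["train"], dataset["validation"]

loss = SpladeLoss(
    model=model,
    loss=SparseCosineSimilarityLoss(model),
    document_regularizer_weight=0.003,
)

args = SparseEncoderTrainingArguments(
    output_dir="SPLADE_moderncamembert_STS",
    num_train_epochs=3,               # default value, kept explicit
    per_device_train_batch_size=16,   # non-default values from the list above
    per_device_eval_batch_size=16,
    eval_strategy="epoch",
    bf16=True,
)

trainer = SparseEncoderTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()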
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: epoch
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 3
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: True
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- tp_size: 0
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: proportional
- router_mapping: {}
- learning_rate_mapping: {}
Training Logs
Epoch | Step | Training Loss | Validation Loss | sts-dev_spearman_cosine | sts-test_spearman_cosine |
---|---|---|---|---|---|
-1 | -1 | - | - | 0.4346 | - |
0.1307 | 100 | 0.1768 | - | - | - |
0.2614 | 200 | 0.0464 | - | - | - |
0.3922 | 300 | 0.0421 | - | - | - |
0.5229 | 400 | 0.043 | - | - | - |
0.6536 | 500 | 0.0424 | - | - | - |
0.7843 | 600 | 0.0449 | - | - | - |
0.9150 | 700 | 0.0428 | - | - | - |
1.0 | 765 | - | 0.0636 | 0.5774 | - |
1.0458 | 800 | 0.0493 | - | - | - |
1.1765 | 900 | 0.0479 | - | - | - |
1.3072 | 1000 | 0.0435 | - | - | - |
1.4379 | 1100 | 0.0445 | - | - | - |
1.5686 | 1200 | 0.0365 | - | - | - |
1.6993 | 1300 | 0.0378 | - | - | - |
1.8301 | 1400 | 0.0411 | - | - | - |
1.9608 | 1500 | 0.0362 | - | - | - |
2.0 | 1530 | - | 0.0634 | 0.6332 | - |
2.0915 | 1600 | 0.0338 | - | - | - |
2.2222 | 1700 | 0.0302 | - | - | - |
2.3529 | 1800 | 0.0303 | - | - | - |
2.4837 | 1900 | 0.0295 | - | - | - |
2.6144 | 2000 | 0.027 | - | - | - |
2.7451 | 2100 | 0.0238 | - | - | - |
2.8758 | 2200 | 0.0244 | - | - | - |
3.0 | 2295 | - | 0.0617 | 0.6356 | - |
-1 | -1 | - | - | - | 0.5929 |
Framework Versions
- Python: 3.12.3
- Sentence Transformers: 5.0.0
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.6.0
- Datasets: 2.16.0
- Tokenizers: 0.21.0
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
SpladeLoss
@misc{formal2022distillationhardnegativesampling,
title={From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective},
author={Thibault Formal and Carlos Lassance and Benjamin Piwowarski and Stéphane Clinchant},
year={2022},
eprint={2205.04733},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2205.04733},
}
FlopsLoss
@article{paria2020minimizing,
title={Minimizing flops to learn efficient sparse representations},
author={Paria, Biswajit and Yeh, Chih-Kuan and Yen, Ian EH and Xu, Ning and Ravikumar, Pradeep and P{\'o}czos, Barnab{\'a}s},
journal={arXiv preprint arXiv:2004.05665},
year={2020}
}