SentenceTransformer based on sentence-transformers/paraphrase-multilingual-mpnet-base-v2
This is a sentence-transformers model trained specifically for job title matching and similarity. It is fine-tuned from sentence-transformers/paraphrase-multilingual-mpnet-base-v2 on a large dataset of job titles and their associated skills/requirements across multiple languages. The model maps English, Spanish, German, and Chinese job titles and descriptions to a 1024-dimensional dense vector space and can be used for semantic job title matching, job similarity search, and related HR/recruitment tasks.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2
- Maximum Sequence Length: 64 tokens
- Output Dimensionality: 1024 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset: 4 × 5.2M high-quality job title–skills pairs in English, Spanish, German, and Chinese
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
  (0): Transformer({'max_seq_length': 64, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Asym(
    (anchor-0): Dense({'in_features': 768, 'out_features': 1024, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
    (positive-0): Dense({'in_features': 768, 'out_features': 1024, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
  )
)
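The same layout can be confirmed after loading the published checkpoint. A minimal sketch that only prints the module stack and the maximum sequence length:

from sentence_transformers import SentenceTransformer

# Load the model and inspect its module stack
model = SentenceTransformer("TechWolf/JobBERT-v3")
print(model)                 # should print the SentenceTransformer(...) structure shown above
print(model.max_seq_length)  # 64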
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load and use the model with the following code. Because the Asym module has separate anchor and positive heads, the snippet encodes job titles through the anchor branch (by setting features["text_keys"] = ["anchor"]) using a small batching helper instead of calling model.encode() directly:
import torch
import numpy as np
from tqdm.auto import tqdm
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import batch_to_device, cos_sim

# Load the model
model = SentenceTransformer("TechWolf/JobBERT-v3")

def encode_batch(jobbert_model, texts):
    # Tokenize and route the batch through the anchor branch of the Asym module
    features = jobbert_model.tokenize(texts)
    features = batch_to_device(features, jobbert_model.device)
    features["text_keys"] = ["anchor"]
    with torch.no_grad():
        out_features = jobbert_model.forward(features)
    return out_features["sentence_embedding"].cpu().numpy()

def encode(jobbert_model, texts, batch_size: int = 8):
    # Sort texts by length and keep track of original indices
    sorted_indices = np.argsort([len(text) for text in texts])
    sorted_texts = [texts[i] for i in sorted_indices]
    embeddings = []
    # Encode in batches
    for i in tqdm(range(0, len(sorted_texts), batch_size)):
        batch = sorted_texts[i:i + batch_size]
        embeddings.append(encode_batch(jobbert_model, batch))
    # Concatenate embeddings and reorder to original indices
    sorted_embeddings = np.concatenate(embeddings)
    original_order = np.argsort(sorted_indices)
    return sorted_embeddings[original_order]

# Example usage
job_titles = [
    'Software Engineer',
    '高级软件开发人员',  # senior software developer
    'Produktmanager',  # product manager
    'Científica de datos'  # data scientist
]

# Get embeddings
embeddings = encode(model, job_titles)

# Calculate cosine similarity matrix
similarities = cos_sim(embeddings, embeddings)
print(similarities)
The output will be a similarity matrix where each value represents the cosine similarity between two job titles:
tensor([[1.0000, 0.8087, 0.4673, 0.5669],
        [0.8087, 1.0000, 0.4428, 0.4968],
        [0.4673, 0.4428, 1.0000, 0.4292],
        [0.5669, 0.4968, 0.4292, 1.0000]])
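Beyond a full pairwise matrix, a common pattern is ranking a pool of job titles against a single query. A minimal sketch reusing the encode helper above; the query and pool strings are only illustrative:

# Rank a pool of job titles against a single query title
query = ['Machine Learning Engineer']
pool = ['Data Scientist', 'Backend Developer', 'HR Business Partner', 'MLOps Engineer']

query_emb = encode(model, query)   # shape (1, 1024)
pool_emb = encode(model, pool)     # shape (4, 1024)

scores = cos_sim(query_emb, pool_emb)[0]            # cosine similarity to each pool title
ranking = scores.argsort(descending=True).tolist()  # best match first
for idx in ranking:
    print(f"{pool[idx]}\t{scores[idx].item():.4f}")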
Training Details
Training Dataset
Unnamed Dataset
- Size: 21,123,868 training samples
- Columns: anchor and positive
- Approximate statistics based on the first 1000 samples:

|  | anchor | positive |
|---|---|---|
| type | string | string |
| details | min: 4 tokens, mean: 10.56 tokens, max: 38 tokens | min: 19 tokens, mean: 61.08 tokens, max: 64 tokens |
- Samples:

| anchor | positive |
|---|---|
| 通信与培训专员 (communications and training specialist) | deliver online training, liaise with educational support staff, interact with an audience, construct individual learning plans, lead a team, develop corporate training programmes, learning technologies, communication, identify with the company's goals, address an audience, learning management systems, use presentation software, motivate others, provide learning support, engage with stakeholders, identify skills gaps, meet expectations of target audience, develop training programmes |
| Associate Infrastructure Engineer | create solutions to problems, design user interface, cloud technologies, use databases, automate cloud tasks, keep up-to-date to computer trends, work in teams, use object-oriented programming, keep updated on innovations in various business fields, design principles, Angular, adapt to changing situations, JavaScript, Agile development, manage stable, Swift (computer programming), keep up-to-date to design industry trends, monitor technology trends, web programming, provide mentorship, advise on efficiency improvements, adapt to change, JavaScript Framework, database management systems, stimulate creative processes |
| 客户顾问/出纳 (customer advisor / cashier) | customer service, handle financial transactions, adapt to changing situations, have computer literacy, manage cash desk, attend to detail, provide customer guidance on product selection, perform multiple tasks at the same time, carry out financial transactions, provide membership service, manage accounts, adapt to change, identify customer's needs, solve problems |
- Loss: CachedMultipleNegativesRankingLoss with these parameters:
  { "scale": 20.0, "similarity_fct": "cos_sim", "mini_batch_size": 512 }
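For context, this loss is set up in Sentence Transformers roughly as follows. This is only a sketch of the loss configuration, starting from the base model and omitting the Asym projection heads shown above; it is not the actual training script:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import CachedMultipleNegativesRankingLoss
from sentence_transformers.util import cos_sim

base = SentenceTransformer("sentence-transformers/paraphrase-multilingual-mpnet-base-v2")

# In-batch negatives: every other positive in the batch acts as a negative for a
# given anchor. Gradient caching processes mini-batches of 512 at a time, so the
# effective batch size can be much larger than what fits in GPU memory at once.
loss = CachedMultipleNegativesRankingLoss(
    model=base,
    scale=20.0,
    similarity_fct=cos_sim,
    mini_batch_size=512,
)

Larger effective batches give each anchor more in-batch negatives, which is why this cached variant is typically paired with the large per-device batch size listed below.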
Training Hyperparameters
Non-Default Hyperparameters
- overwrite_output_dir: True
- per_device_train_batch_size: 2048
- per_device_eval_batch_size: 2048
- num_train_epochs: 1
- fp16: True
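As a rough sketch, the non-default values above map onto SentenceTransformerTrainingArguments as shown below; the output directory is a placeholder, not taken from the original run, and all other arguments keep their library defaults:

from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output/jobbert-v3",  # placeholder path
    overwrite_output_dir=True,
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=2048,
    num_train_epochs=1,
    fp16=True,
)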
All Hyperparameters
- overwrite_output_dir: True
- do_predict: False
- eval_strategy: no
- prediction_loss_only: True
- per_device_train_batch_size: 2048
- per_device_eval_batch_size: 2048
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 1
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: proportional
Training Logs
Epoch | Step | Training Loss |
---|---|---|
0.0485 | 500 | 3.89 |
0.0969 | 1000 | 3.373 |
0.1454 | 1500 | 3.1715 |
0.1939 | 2000 | 3.0414 |
0.2424 | 2500 | 2.9462 |
0.2908 | 3000 | 2.8691 |
0.3393 | 3500 | 2.8048 |
0.3878 | 4000 | 2.7501 |
0.4363 | 4500 | 2.7026 |
0.4847 | 5000 | 2.6601 |
0.5332 | 5500 | 2.6247 |
0.5817 | 6000 | 2.5951 |
0.6302 | 6500 | 2.5692 |
0.6786 | 7000 | 2.5447 |
0.7271 | 7500 | 2.5221 |
0.7756 | 8000 | 2.5026 |
0.8240 | 8500 | 2.4912 |
0.8725 | 9000 | 2.4732 |
0.9210 | 9500 | 2.4608 |
0.9695 | 10000 | 2.4548 |
Environmental Impact
Carbon emissions were measured using CodeCarbon.
- Energy Consumed: 1.944 kWh
- Carbon Emitted: 0.717 kg of CO2
- Hours Used: 5.34 hours
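As a reference for how such figures are typically collected, a minimal CodeCarbon sketch; the project name is illustrative and not taken from the original run:

from codecarbon import EmissionsTracker

tracker = EmissionsTracker(project_name="jobbert-v3-training")  # illustrative name
tracker.start()
# ... run training here ...
emissions_kg = tracker.stop()  # estimated emissions in kg of CO2
print(f"Estimated emissions: {emissions_kg:.3f} kg CO2")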
Training Hardware
- On Cloud: Yes
- GPU Model: 1 x NVIDIA A100-SXM4-40GB
- CPU Model: Intel(R) Xeon(R) CPU @ 2.20GHz
- RAM Size: 83.48 GB
Framework Versions
- Python: 3.10.16
- Sentence Transformers: 4.1.0
- Transformers: 4.48.3
- PyTorch: 2.6.0+cu126
- Accelerate: 1.3.0
- Datasets: 3.5.1
- Tokenizers: 0.21.0
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
CachedMultipleNegativesRankingLoss
@misc{gao2021scaling,
title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
year={2021},
eprint={2101.06983},
archivePrefix={arXiv},
primaryClass={cs.LG}
}