SentenceTransformer based on manu/bge-m3-custom-fr

This is a sentence-transformers model finetuned from manu/bge-m3-custom-fr. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: manu/bge-m3-custom-fr
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
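
Because the model is a standard Sentence Transformers module stack, these settings can also be inspected programmatically. A minimal sketch (the model id below is this repository's; substitute a local path if you keep a local copy):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("mohitdeharkar/warp_fine_tuned_bge_m3")

print(model.max_seq_length)                       # 8192
print(model[1].pooling_mode_cls_token)            # True: CLS-token pooling
print(model.get_sentence_embedding_dimension())   # 1024

The final Normalize() module returns unit-length vectors, so dot products between embeddings are already cosine similarities.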

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
    'Machine Learning Engineering for Production (MLOps)',
    "Understanding machine learning and deep learning concepts is essential, but if you’re looking to build an effective AI career, you need production engineering capabilities as well. \nEffectively deploying machine learning models requires competencies more commonly found in technical fields such as software engineering and DevOps. Machine learning engineering for production combines the foundational concepts of machine learning with the functional expertise of modern software development and engineering roles. \nThe Machine Learning Engineering for Production (MLOps) Specialization covers how to conceptualize, build, and maintain integrated systems that continuously operate in production. In striking contrast with standard machine learning modeling, production systems need to handle relentless evolving data. Moreover, the production system must run non-stop at the minimum cost while producing the maximum performance. In this Specialization, you will learn how to use well-established tools and methodologies for doing all of this effectively and efficiently.\nIn this Specialization, you will become familiar with the capabilities, challenges, and consequences of machine learning engineering in production. By the end, you will be ready to employ your new production-ready skills to participate in the development of leading-edge AI technology to solve real-world problems.\nApplied Learning Project\nBy the end, you'll be ready to\n• Design an ML production system end-to-end: project scoping, data needs, modeling strategies, and deployment requirements\n• Establish a model baseline, address concept drift, and prototype how to develop, deploy, and continuously improve a productionized ML application\n• Build data pipelines by gathering, cleaning, and validating datasets\n• Implement feature engineering, transformation, and selection with TensorFlow Extended\n• Establish data lifecycle by leveraging data lineage and provenance metadata tools and follow data evolution with enterprise data schemas\n• Apply techniques to manage modeling resources and best serve offline/online inference requests\n• Use analytics to address model fairness, explainability issues, and mitigate bottlenecks\n• Deliver deployment pipelines for model serving that require different infrastructures\n• Apply best practices and progressive delivery techniques to maintain a continuously operating production system\n Advanced",
    'Este certificado de cinco cursos, desenvolvido pelo Google, inclui um currículo inovador projetado para prepará-lo para uma função de nível básico em suporte de TI. Uma posição na área de TI pode ser um serviço de apoio pessoalmente ou remoto em uma pequena empresa ou em uma empresa global como o Google. Se você já lida com TI por algum tempo, ou é novo no campo, você veio ao lugar certo. O programa faz parte do Cresça com o Google, uma iniciativa do Google para ajudar a criar oportunidades econômicas.\nAtravés de uma mistura de palestras em vídeo, questionários e laboratórios e widgets práticos, o programa apresentará soluções de problemas e atendimento ao cliente, redes, sistemas operacionais, administração de sistemas e segurança. Ao longo do caminho, você aprenderá de Googlers com históricos exclusivos cuja base no suporte de TI serviu como um ponto de partida para suas carreiras.\nAo dedicar 5 horas por semana, você pode concluir o certificado em cerca de seis meses. Você pode pular o conteúdo que já sabe e fazer mais cedo os exames de avaliação.\nO conteúdo do Certificado Profissional de Suporte em TI do Google está sob a Licença Internacional de Atribuição 4.0 da Creative Commons.\n75% dos alunos que obtêm os Certificados do Google nos Estados Unidos relatam uma melhora em suas carreiras dentro de um intervalo de 6 meses após a obtenção da certificação.\nFonte: *baseado nas respostas de pesquisa com os graduados pelo programa, Estados Unidos, 2021\nApplied Learning Project\nEste certificado de cinco cursos, desenvolvido pelo Google, inclui um currículo inovador projetado para prepará-lo para uma função de nível básico em suporte de TI. Uma posição na área de TI pode ser um trabalho de serviço de apoio em pessoa ou remoto em uma pequena empresa ou em uma empresa global como o Google. Se você já lida com TI por algum tempo, ou é novo no campo, você veio ao lugar certo. O programa faz parte do Grow with Google, uma iniciativa do Google para ajudar a criar oportunidades econômicas.\n Beginner',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
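
Beyond pairwise similarity, the same embeddings can be used for semantic search over a corpus. A minimal sketch using sentence_transformers.util; the corpus reuses course titles from this card and the query string is purely illustrative:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("mohitdeharkar/warp_fine_tuned_bge_m3")

corpus = [
    "Machine Learning Engineering for Production (MLOps)",
    "Business Strategies for A Better World",
    "Financial Acumen for Non-Financial Managers",
]
query = "How do I deploy and monitor machine learning models in production?"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# For each query, returns the top_k corpus entries ranked by cosine similarity
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
for hit in hits[0]:
    print(corpus[hit["corpus_id"]], round(hit["score"], 3))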

Training Details

Training Dataset

Unnamed Dataset

  • Size: 999 training samples
  • Columns: sentence_0, sentence_1, and label
  • Approximate statistics based on the first 999 samples:
    • sentence_0: string; min: 3 tokens, mean: 9.4 tokens, max: 30 tokens
    • sentence_1: string; min: 8 tokens, mean: 290.27 tokens, max: 898 tokens
    • label: float; min: 1.0, mean: 1.0, max: 1.0
  • Samples:
    • sentence_0: AI Applications in Marketing and Finance
      sentence_1: In this course, you will learn about AI-powered applications that can enhance the customer journey and extend the customer lifecycle. You will learn how this AI-powered data can enable you to analyze consumer habits and maximize their potential to target your marketing to the right people. You will also learn about fraud, credit risks, and how AI applications can also help you combat the ever-challenging landscape of protecting consumer data. You will also learn methods to utilize supervised and unsupervised machine learning to enhance your fraud detection methods. You will also hear from leading industry experts in the world of data analytics, marketing, and fraud prevention. By the end of this course, you will have a substantial understanding of the role AI and Machine Learning play when it comes to consumer habits, and how we are able to interact and analyze information to increase deep learning potential for your business. Mixed
      label: 1.0
    • sentence_0: Business Strategies for A Better World
      sentence_1: In this Specialization, you’ll develop basic literacy in the language of business, which you can use to transition to a new career, start or improve your own small business, or apply to business school to continue your education. In five courses, you’ll learn the fundamentals of marketing, accounting, operations, and finance. In the final Capstone Project, you’ll apply the skills learned by developing a go-to-market strategy to address a real business challenge. Beginner
      label: 1.0
    • sentence_0: Financial Acumen for Non-Financial Managers
      sentence_1: In Finance for Technical Managers, you will explore the fundamental principles of financial management. Topics include understanding and interpreting a company’s financial statements, the time value of money and its role in evaluating the economic viability of different projects, and the annual capital budgeting process every company performs when selecting which projects to fund. In addition, you will cover some highly practical topics, such as how to determine product costs, establishing a department’s annual budget, and ways of forecasting future sales. As a side benefit, the quantitative skills you will learn for business are identical to the skills necessary to manage your own personal finances. Therefore, you will extend your analyses to cover investing of mutual funds composed of stocks and bonds, and you will explore the fascinating area of asset allocation. This specialization can be taken for academic credit as part of CU Boulder’s Master of Engineering in Engineering Managem...
      label: 1.0
  • Loss: CosineSimilarityLoss with these parameters:
    {
        "loss_fct": "torch.nn.modules.loss.MSELoss"
    }
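
Given the column format above (sentence_0, sentence_1, label), CosineSimilarityLoss embeds both texts, takes the cosine similarity of the two embeddings, and regresses it onto the label with the MSELoss shown in the parameters. A minimal sketch of that objective using random, purely illustrative embeddings:

import torch
import torch.nn.functional as F

u = torch.randn(1024)       # embedding of sentence_0 (illustrative)
v = torch.randn(1024)       # embedding of sentence_1 (illustrative)
label = torch.tensor(1.0)   # target similarity from the label column

# cosine similarity of the pair, then mean squared error against the label
loss = F.mse_loss(F.cosine_similarity(u, v, dim=0), label)
print(loss)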
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 4
  • per_device_eval_batch_size: 4
  • num_train_epochs: 2
  • fp16: True
  • multi_dataset_batch_sampler: round_robin
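
Wired together, a comparable fine-tuning run could look roughly like the sketch below. This is an approximation only: the dataset is a stand-in built from one sample shown earlier, and the output directory name is arbitrary.

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

model = SentenceTransformer("manu/bge-m3-custom-fr")

# Stand-in for the 999-sample training set (columns: sentence_0, sentence_1, label)
train_dataset = Dataset.from_dict({
    "sentence_0": ["Business Strategies for A Better World"],
    "sentence_1": ["In this Specialization, you’ll develop basic literacy in the language of business..."],
    "label": [1.0],
})

train_loss = losses.CosineSimilarityLoss(model)  # defaults to an MSELoss objective

args = SentenceTransformerTrainingArguments(
    output_dir="warp_fine_tuned_bge_m3",       # arbitrary output path
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    num_train_epochs=2,
    fp16=True,                                 # requires a CUDA device
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=train_loss,
)
trainer.train()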

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 4
  • per_device_eval_batch_size: 4
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 2
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

  • Epoch 2.0, Step 500: training loss 0.013

Framework Versions

  • Python: 3.10.16
  • Sentence Transformers: 3.4.1
  • Transformers: 4.49.0
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.5.2
  • Datasets: 3.4.1
  • Tokenizers: 0.21.1
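
To approximate this environment, the listed versions can be pinned at install time (PyTorch 2.6.0 with the cu124 build typically needs to come from the matching PyTorch index for your CUDA setup):

pip install sentence-transformers==3.4.1 transformers==4.49.0 accelerate==1.5.2 datasets==3.4.1 tokenizers==0.21.1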

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}