SentenceTransformer based on nomic-ai/nomic-embed-text-v2-moe
This is a sentence-transformers model finetuned from nomic-ai/nomic-embed-text-v2-moe on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: nomic-ai/nomic-embed-text-v2-moe
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset: json
Model Sources
- Documentation: Sentence Transformers Documentation (https://www.sbert.net)
- Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
- Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)
Full Model Architecture
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: NomicBertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
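Concretely, the Transformer module emits one 768-dimensional vector per token, the Pooling module (mean-tokens mode) averages those vectors under the attention mask, and Normalize L2-normalizes the result so that dot products equal cosine similarities. A minimal sketch of the pooling and normalization stages on dummy tensors:

```python
import torch
import torch.nn.functional as F

# Dummy stand-ins for one batch: per-token embeddings from the Transformer
# module and the attention mask marking real (non-padding) tokens.
token_embeddings = torch.randn(2, 12, 768)            # (batch, seq_len, hidden)
attention_mask = torch.ones(2, 12, dtype=torch.long)  # 1 = real token, 0 = padding

# (1) Pooling with pooling_mode_mean_tokens=True: mask-aware mean over tokens.
mask = attention_mask.unsqueeze(-1).float()
sentence_embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

# (2) Normalize: L2-normalize each sentence embedding.
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print(sentence_embeddings.shape)  # torch.Size([2, 768])
```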
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Demircan12/nomic-embed-text-v2-moe-YeditepeFT")
# Run inference
sentences = [
    # Turkish query; in English: "How is an attempt to cheat on the CSE 447 (Ozkaya) exam evaluated?"
    'CSE 447 (Ozkaya) sınavında kopya çekme girişimi nasıl değerlendirilir?',
    'Exam Cheating Policy: Any attempt at cheating during the midterm and final exams will be treated seriously.',
    'Week-10 AVL tree\nWeek-11 IPR tree\nWeek-12 B tree\nWeek-13 B+ tree',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
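The same embeddings support semantic search: encode a query and a corpus separately, then rank corpus entries by similarity. A minimal sketch; the query and corpus strings below are illustrative stand-ins, not taken from the training data:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Demircan12/nomic-embed-text-v2-moe-YeditepeFT")

# Hypothetical corpus of course-policy snippets, plus a query.
corpus = [
    "Exam Cheating Policy: Any attempt at cheating will be treated seriously.",
    "Grading Breakdown: Midterm: 30%, Final: 35%, Homeworks: 15%, Project: 20%",
]
query = "How are cheating attempts handled during exams?"

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode([query])

# Cosine similarity of the query against every corpus entry; higher = more relevant.
scores = model.similarity(query_embedding, corpus_embeddings)  # shape: [1, 2]
best = scores.argmax().item()
print(corpus[best])
```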
Training Details
Training Dataset
json
- Dataset: json
- Size: 1,535 training samples
- Columns: anchor and positive
- Approximate statistics based on the first 1000 samples:

 | anchor | positive
---|---|---
type | string | string
details | min: 11 tokens, mean: 22.23 tokens, max: 43 tokens | min: 6 tokens, mean: 38.09 tokens, max: 247 tokens
- Samples:

anchor | positive
---|---
Are the Fall (Regular) and Spring (Irregular) programs in the Faculty of Law different according to MADDE 5? | Güz (Regular) ve Bahar (Irregular) programları başlangıç zamanı dışında her açıdan birbirine denktir.
According to MADDE 6, who can take the Postgraduate Proficiency Exam? | (2) Yeterlik Sınavı’na, yeni kayıtlı öğrencilerle birlikte halen hazırlık programında öğrenimine devam eden öğrenciler de girebilirler.
What is the purpose of the Horizontal/Vertical Transfer Adaptation Principles (Madde 1)? | Madde 1- (1) Yatay/Dikey Geçiş İntibak Esaslarının amacı, Yeditepe Üniversitesine yatay geçiş veya dikey geçiş ile kabul edilen öğrencilerin intibak işlemlerine ilişkin esas ve usulleri belirlemektir.
- Loss: MultipleNegativesRankingLoss with these parameters: { "scale": 20.0, "similarity_fct": "cos_sim" }
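With these parameters, the loss scales the anchor-positive cosine-similarity matrix by 20.0 and applies cross-entropy, so each anchor must rank its paired positive above every other positive in the batch (the in-batch negatives). A minimal PyTorch sketch of that objective, for illustration rather than the library's exact implementation:

```python
import torch
import torch.nn.functional as F

def mnrl(anchors: torch.Tensor, positives: torch.Tensor, scale: float = 20.0) -> torch.Tensor:
    # Cosine similarity matrix: entry [i, j] compares anchor i with positive j.
    a = F.normalize(anchors, dim=1)
    p = F.normalize(positives, dim=1)
    scores = a @ p.T * scale
    # The matching positive for anchor i sits at column i; all other columns act as negatives.
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)

loss = mnrl(torch.randn(16, 768), torch.randn(16, 768))  # dummy batch of 16 pairs
```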
Evaluation Dataset
json
- Dataset: json
- Size: 220 evaluation samples
- Columns: anchor and positive
- Approximate statistics based on the first 220 samples:

 | anchor | positive
---|---|---
type | string | string
details | min: 12 tokens, mean: 21.82 tokens, max: 39 tokens | min: 9 tokens, mean: 36.3 tokens, max: 160 tokens
- Samples:

anchor | positive
---|---
CSE 439 dersinin notlandırma dağılımı nasıldır? | Grading Breakdown: Midterm: 30%\nFinal: 35%\nHomeworks and Quizzes: 15%\nTerm Project: 20%
Hukuk Fakültesi Yönetmeliği Madde 13'e göre dersler öğrencilerin hangi yeteneklerini geliştirmeye yönelik yürütülür? | (3) Dersler öğrencilerin muhakeme ve sözlü-yazılı anlatım yeteneklerinin geliştirilmesine katkı sağlayacak şekilde yürütülür.
Yönetmelik Madde 18'e göre hangi sınav Yönetmeliğine göre başarılı olanlar dil sınavından muaf tutulur? | b) Öğretim dilinin anadil olarak konuşulduğu ülkelerde yabancıların yükseköğrenim görebilmeleri için aranan asgari yabancı dil seviyesinin tespiti amacına yönelik olarak yapılan sınavlarda ve 25/9/2013 tarihli ve 28776 sayılı Resmî Gazete’de yayımlanan Yeditepe Üniversitesi Yabancı Diller Hazırlık Programı Eğitim-Öğretim ve Sınav Yönetmeliği hükümlerine göre başarılı olanlar.
- Loss: MultipleNegativesRankingLoss with these parameters: { "scale": 20.0, "similarity_fct": "cos_sim" }
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- num_train_epochs: 5
- warmup_ratio: 0.1
- batch_sampler: no_duplicates
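These non-default values map directly onto the Sentence Transformers v3+ training API. A minimal sketch of a comparable setup; the file paths and output directory are assumptions, not recorded in this card:

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("nomic-ai/nomic-embed-text-v2-moe", trust_remote_code=True)
# Assumption: the anchor/positive pairs live in local JSON files.
train_dataset = load_dataset("json", data_files="train.json", split="train")
eval_dataset = load_dataset("json", data_files="eval.json", split="train")

args = SentenceTransformerTrainingArguments(
    output_dir="nomic-embed-text-v2-moe-YeditepeFT",  # assumed path
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    warmup_ratio=0.1,
    # no_duplicates keeps repeated texts out of a batch; duplicates would act
    # as false negatives under MultipleNegativesRankingLoss.
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=MultipleNegativesRankingLoss(model),
)
trainer.train()
```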
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 5
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- tp_size: 0
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
Training Logs
Epoch | Step | Training Loss | Validation Loss
---|---|---|---
1.0417 | 100 | 0.1199 | 0.0659
2.0833 | 200 | 0.0236 | 0.0524
3.125 | 300 | 0.0145 | 0.0578
4.1667 | 400 | 0.0102 | 0.0617
Framework Versions
- Python: 3.11.12
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
Model tree for Demircan12/nomic-embed-text-v2-moe-YeditepeFT
- Base model: FacebookAI/xlm-roberta-base
- Finetuned into: nomic-ai/nomic-xlm-2048
- Finetuned into: nomic-ai/nomic-embed-text-v2-moe
- Finetuned into: Demircan12/nomic-embed-text-v2-moe-YeditepeFT (this model)