greatakela committed
Commit e17e45c · verified · 1 parent: 338902b

Add new SentenceTransformer model

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
{
  "word_embedding_dimension": 768,
  "pooling_mode_cls_token": false,
  "pooling_mode_mean_tokens": true,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false,
  "pooling_mode_weightedmean_tokens": false,
  "pooling_mode_lasttoken": false,
  "include_prompt": true
}
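
The pooling configuration above selects plain mean pooling over token embeddings (no CLS, max, or last-token pooling, and prompt tokens are included in the average). As a rough illustration of what this module computes, here is a minimal mean-pooling sketch using plain `transformers`; the `mean_pool` helper is illustrative and not part of this repository:

```python
# Minimal sketch of the mean pooling configured above (illustrative only).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/mpnet-base")
model = AutoModel.from_pretrained("microsoft/mpnet-base")

def mean_pool(last_hidden_state: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # Zero out padding positions, then average over the remaining tokens.
    mask = attention_mask.unsqueeze(-1).type_as(last_hidden_state)
    summed = (last_hidden_state * mask).sum(dim=1)
    counts = mask.sum(dim=1).clamp(min=1e-9)
    return summed / counts

batch = tokenizer(["a sample sentence"], padding=True, truncation=True,
                  max_length=256, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state           # [batch, seq_len, 768]
embeddings = mean_pool(hidden, batch["attention_mask"])  # [batch, 768]
```
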
README.md ADDED
@@ -0,0 +1,429 @@
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:4893
- loss:TripletLoss
base_model: microsoft/mpnet-base
widget:
- source_sentence: I almost envy you your assignment. I see in your mind that you
    are tempted to take my place. Not correct, Doctor, although I am aware of your
    mind attempting to contact mine. Were you born a telepath? Yes. That is why I
    had to study on Vulcan. I understand. May I show you to your quarters?[SEP]I think
    I'll stay here a bit. Ambassador Kollos often finds the process of transport somewhat
    unsettling.
  sentences:
  - ' I don''t see anything, do you?'
  - I understand. Our ship's surgeon often makes the same complaint. Do call when
    you are ready.
  - Irrelevant, since we are here.
- source_sentence: Aye, sir. Aye, sir. Return to your station, Sub-commander. The
    boarding action on the Enterprise will begin with my command. If they resist,
    destroy her. Execution of state criminals is both painful and unpleasant. I believe
    the details are unnecessary. The sentence will be carried out immediately after
    the charges have been recorded. I demand the Right of Statement first.[SEP]You
    understand Romulan tradition well. The right is granted.
  sentences:
  - ' No, I... it''s just... you''re just coming off the surgery and you''re not yourself
    yet and I work for you and Even though last year''s... [Frustrated sigh as House
    starts smiling smugly.] you''re smiling! I''m saying no and you''re smiling!'
  - I wish to ask a question. What of Sarek's family, his wife and son?
  - Thank you. I shall not require much time. No more than twenty minutes, I should
    say.
- source_sentence: Mankind, ready to kill. That's the way it was in 1881. I wonder
    how humanity managed to survive. We overcame our instinct for violence. Some desk-bound
    Starfleet bureaucrat cut these cloak-and-dagger orders.[SEP]Aye, but why the secrecy?
    This star system's under Federation control.
  sentences:
  - It's in a border area, Mister Scott. The Klingons also claim jurisdiction.
  - ' That''s your argument? Better outcome?'
  - Are you all right, Captain?
- source_sentence: We're trying to help you, Oxmyx. Nobody helps nobody but himself.
    Sir, you are employing a double negative. Huh? I fail to see why you do not understand
    us. You yourself have stated the need for unity of authority on this planet. We
    agree.[SEP]Yeah, but I got to be the unity.
  sentences:
  - Co-operation, sir, would inevitably result
  - Quite right, Mister Scott. There's somebody holding us down. All systems are go,
    but we're not moving.
  - ' So today I''m jailbait but in 22 weeks anybody can do anything to me. Will I
    be so different in 22 weeks?'
- source_sentence: What happened? Where have I been? Right here, it seems. But that
    girl. She was so beautiful. So real. Do you remember anything else? No.[SEP]Good.
    Perhaps that explains why he's here. Nothing was real to him except the girl.
  sentences:
  - Sweeping the area of Outpost two. Sensor reading indefinite. Double-checking Outpost
    three. I read dust and debris. Both Earth outposts gone, and the asteroids they
    were constructed on, pulverised.
  - ' It''s killing you.'
  - Captain, the Melkotian object.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
model-index:
- name: SentenceTransformer based on microsoft/mpnet-base
  results:
  - task:
      type: triplet
      name: Triplet
    dataset:
      name: evaluator enc
      type: evaluator_enc
    metrics:
    - type: cosine_accuracy
      value: 0.9989781379699707
      name: Cosine Accuracy
  - task:
      type: triplet
      name: Triplet
    dataset:
      name: evaluator val
      type: evaluator_val
    metrics:
    - type: cosine_accuracy
      value: 0.9930555820465088
      name: Cosine Accuracy
---

# SentenceTransformer based on microsoft/mpnet-base

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) <!-- at revision 6996ce1e91bd2a9c7d7f61daec37463394f73f09 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: MPNetModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("greatakela/gennlp_hw1_encoder2025")
# Run inference
sentences = [
    "What happened? Where have I been? Right here, it seems. But that girl. She was so beautiful. So real. Do you remember anything else? No.[SEP]Good. Perhaps that explains why he's here. Nothing was real to him except the girl.",
    'Captain, the Melkotian object.',
    " It's killing you.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
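
The triplets in the training data appear to pair a dialogue context with its actual next line and a distractor, so the same embeddings can also be used to rank candidate replies against a context. A minimal sketch, reusing strings from the widget examples above (variable names are illustrative):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("greatakela/gennlp_hw1_encoder2025")

# Dialogue context (anchor) and a few candidate next lines.
context = "Aye, but why the secrecy? This star system's under Federation control."
candidates = [
    "It's in a border area, Mister Scott. The Klingons also claim jurisdiction.",
    "Are you all right, Captain?",
    " That's your argument? Better outcome?",
]

context_emb = model.encode([context])
candidate_embs = model.encode(candidates)

# Cosine similarity between the context and each candidate, best match first.
scores = model.similarity(context_emb, candidate_embs)[0]
for idx in scores.argsort(descending=True):
    i = int(idx)
    print(f"{float(scores[i]):.3f}  {candidates[i]}")
```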

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Triplet

* Datasets: `evaluator_enc` and `evaluator_val`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)

| Metric              | evaluator_enc | evaluator_val |
|:--------------------|:--------------|:--------------|
| **cosine_accuracy** | **0.999**     | **0.9931**    |

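These accuracies were produced with `TripletEvaluator`, which counts a triplet as correct when the anchor embedding is closer to the positive than to the negative. A minimal sketch of running the same kind of evaluation on your own triplets (the three strings below are placeholders, not the actual validation split):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("greatakela/gennlp_hw1_encoder2025")

# Placeholder triplets: dialogue context, true next line, distractor line.
anchors   = ["dialogue context ...[SEP]last utterance"]
positives = ["the line that actually follows"]
negatives = ["an unrelated distractor line"]

evaluator = TripletEvaluator(
    anchors=anchors,
    positives=positives,
    negatives=negatives,
    name="my_triplets",
)
print(evaluator(model))  # e.g. {'my_triplets_cosine_accuracy': ...}
```
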
<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 4,893 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>sentence_2</code>
* Approximate statistics based on the first 1000 samples:

  |         | sentence_0 | sentence_1 | sentence_2 |
  |:--------|:-----------|:-----------|:-----------|
  | type    | string     | string     | string     |
  | details | <ul><li>min: 2 tokens</li><li>mean: 90.47 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 18.64 tokens</li><li>max: 98 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 20.14 tokens</li><li>max: 199 tokens</li></ul> |

* Samples:

  | sentence_0 | sentence_1 | sentence_2 |
  |:-----------|:-----------|:-----------|
  | <code>Oh, well, if that's all. Mister Scott, transport the glommer over to the Klingon ship. Aye, sir. You can't do this to me. Under space salvage laws, he's mine. A planetary surface is not covered by space salvage laws. But if you want the little beastie that bad, Mister Jones, we'll transport you over with it. I withdraw my claim.[SEP]Well, at least we can report the stasis field is not as effective a weapon as we thought. The power drain is too high and takes too long for the Klingon ship to recover to make it practical.</code> | <code>Agreed, Captain. Tribbles appear to be a much more effective weapon.</code> | <code> [protesting] I give him...</code> |
  | <code>Do you mean that's what the Kelvans really are? Undoubtedly. Well, if they look that way normally, why did they adapt themselves to our bodies? Perhaps practicality. They chose the Enterprise as the best vessel for the trip. Immense beings with a hundred tentacles would have difficulty with the turbolift. We've got to stop them. We outnumber them. Their only hold on us is the paralysis field. Well, that's enough. One wrong move, and they jam all our neural circuits.[SEP]Jam. Spock, if you reverse the circuits on McCoy's neuro-analyser, can you set up a counter field to jam the paralysis projector?</code> | <code>I'm dubious of the possibilities of success, Captain. The medical equipment is not designed to put out a great deal of power. The polarized elements would burn out quickly.</code> | <code> The next step would be a type of brain surgery.</code> |
  | <code>Well, speculation isn't much help. We have to get in there. Perhaps there is a way open on the far side. There is much less activity there. That building in the centre. It seems to be important. You stand before the Ruling Tribunal of the Aquans. I am Domar, the High Tribune. I'm Captain Kirk of the starship Enterprise. This is my first officer, Mister Spock.[SEP]You are air-breather enemies from the surface. We have expected spies for a long time.</code> | <code>We came here in peace, Tribune.</code> | <code> Which is why we need to look at the nerve that you didn't biopsy.</code> |

* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
  ```json
  {
      "distance_metric": "TripletDistanceMetric.EUCLIDEAN",
      "triplet_margin": 5
  }
  ```
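
The training script itself is not part of this repository; the following is a rough sketch of how a comparable run could be set up with the sentence-transformers 3.x Trainer API, using the loss parameters above and the non-default hyperparameters listed below (dataset contents and the output directory are placeholders):

```python
# Illustrative sketch only; not the original training code.
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import TripletLoss, TripletDistanceMetric

model = SentenceTransformer("microsoft/mpnet-base")

# Placeholder triplets with the same column layout as described above:
# (anchor context, positive next line, negative distractor).
train_dataset = Dataset.from_dict({
    "sentence_0": ["dialogue context ...[SEP]last utterance"],
    "sentence_1": ["the line that actually follows"],
    "sentence_2": ["an unrelated distractor line"],
})

loss = TripletLoss(
    model=model,
    distance_metric=TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=5,
)

args = SentenceTransformerTrainingArguments(
    output_dir="mpnet-base-triplet",   # hypothetical path
    num_train_epochs=3,
    per_device_train_batch_size=8,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```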

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `multi_dataset_batch_sampler`: round_robin

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin

</details>

### Training Logs
| Epoch  | Step | Training Loss | evaluator_enc_cosine_accuracy | evaluator_val_cosine_accuracy |
|:------:|:----:|:-------------:|:-----------------------------:|:-----------------------------:|
| -1     | -1   | -             | 0.5494                        | -                             |
| 0.4902 | 300  | -             | 0.9808                        | -                             |
| 0.8170 | 500  | 1.4249        | -                             | -                             |
| 0.9804 | 600  | -             | 0.9912                        | -                             |
| 1.0    | 612  | -             | 0.9931                        | -                             |
| 1.4706 | 900  | -             | 0.9963                        | -                             |
| 1.6340 | 1000 | 0.2269        | -                             | -                             |
| 1.9608 | 1200 | -             | 0.9990                        | -                             |
| 2.0    | 1224 | -             | 0.9990                        | -                             |
| 2.4510 | 1500 | 0.1054        | 0.9990                        | -                             |
| 2.9412 | 1800 | -             | 0.9990                        | -                             |
| 3.0    | 1836 | -             | 0.9990                        | -                             |
| -1     | -1   | -             | -                             | 0.9931                        |


### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.48.2
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### TripletLoss
```bibtex
@misc{hermans2017defense,
    title={In Defense of the Triplet Loss for Person Re-Identification},
    author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
    year={2017},
    eprint={1703.07737},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json ADDED
@@ -0,0 +1,24 @@
{
  "_name_or_path": "microsoft/mpnet-base",
  "architectures": [
    "MPNetModel"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "eos_token_id": 2,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 514,
  "model_type": "mpnet",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 1,
  "relative_attention_num_buckets": 32,
  "torch_dtype": "float32",
  "transformers_version": "4.48.2",
  "vocab_size": 30527
}
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
{
  "__version__": {
    "sentence_transformers": "3.4.1",
    "transformers": "4.48.2",
    "pytorch": "2.5.1+cu124"
  },
  "prompts": {},
  "default_prompt_name": null,
  "similarity_fn_name": "cosine"
}
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6debb3164ba4312a4a69728de45a92677dc1c18d79a2ed02540ba2081c4ca442
size 437967672
modules.json ADDED
@@ -0,0 +1,14 @@
[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_Pooling",
    "type": "sentence_transformers.models.Pooling"
  }
]
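
`modules.json` records the two modules that make up the saved pipeline: the Transformer wrapper around MPNet (module `0`) and the mean-pooling layer stored under `1_Pooling` (module `1`). For reference, a minimal sketch of assembling an equivalent untrained pipeline by hand with the `sentence_transformers.models` API (values taken from the configuration files in this commit):

```python
# Illustrative sketch: building the same two-module pipeline from scratch.
from sentence_transformers import SentenceTransformer, models

transformer = models.Transformer("microsoft/mpnet-base", max_seq_length=256)
pooling = models.Pooling(
    transformer.get_word_embedding_dimension(),  # 768
    pooling_mode="mean",                         # matches 1_Pooling/config.json
)
model = SentenceTransformer(modules=[transformer, pooling])
```
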
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
{
  "max_seq_length": 256,
  "do_lower_case": false
}
special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
{
  "bos_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "cls_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "<mask>",
    "lstrip": true,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<pad>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "[UNK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,66 @@
{
  "added_tokens_decoder": {
    "0": {
      "content": "<s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<pad>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "</s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "3": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "104": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "30526": {
      "content": "<mask>",
      "lstrip": true,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "bos_token": "<s>",
  "clean_up_tokenization_spaces": false,
  "cls_token": "<s>",
  "do_lower_case": true,
  "eos_token": "</s>",
  "extra_special_tokens": {},
  "mask_token": "<mask>",
  "model_max_length": 256,
  "pad_token": "<pad>",
  "sep_token": "</s>",
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "MPNetTokenizer",
  "unk_token": "[UNK]"
}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff