srinivasanAI committed
Commit 8c54e93 · verified · 1 Parent(s): 756d9e5

Initial commit: fine-tuned BGE-small on custom Q&A data

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 384,
+   "pooling_mode_cls_token": true,
+   "pooling_mode_mean_tokens": false,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": false,
+   "include_prompt": true
+ }
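
This pooling config keeps only the first ([CLS]) token of BERT's per-token outputs as the sentence vector; every other pooling mode is disabled. A minimal sketch of the equivalent operation on raw transformer outputs, assuming the base model is available from the Hub (the full pipeline additionally L2-normalizes the result, see modules.json below):

```python
# Sketch of what this pooling config does: of all the per-token vectors
# BERT produces, keep only the first ([CLS]) token as the sentence vector.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-small-en-v1.5")
encoder = AutoModel.from_pretrained("BAAI/bge-small-en-v1.5")

batch = tokenizer(["what type of habitat do sea turtles live in"], return_tensors="pt")
with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state  # (1, seq_len, 384)

cls_embedding = token_embeddings[:, 0]  # CLS pooling -> (1, 384)
print(cls_embedding.shape)
```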
README.md ADDED
@@ -0,0 +1,461 @@
+ ---
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - generated_from_trainer
+ - dataset_size:100231
+ - loss:MultipleNegativesRankingLoss
+ base_model: BAAI/bge-small-en-v1.5
+ widget:
+ - source_sentence: 'Represent this sentence for searching relevant passages: where
+     do the chances live on raising hope'
+   sentences:
+   - Raising Hope James "Jimmy" Chance is a 23-year old, living in the surreal fictional
+     town of Natesville, who impregnates a serial killer during a one-night stand.
+     Earning custody of his daughter, Hope, after the mother is sentenced to death,
+     Jimmy relies on his oddball but well-intentioned family for support in raising
+     the child.
+   - Quadripoint A quadripoint is a point on the Earth that touches the border of four
+     distinct territories.[1][2] The term has never been in common use—it may not have
+     been used before 1964 when it was possibly invented by the Office of the Geographer
+     of the United States Department of State.[3][n 1] The word does not appear in
+     the Oxford English Dictionary or Merriam-Webster Online dictionary, but it does
+     appear in the Encyclopædia Britannica,[4] as well as in the World Factbook articles
+     on Botswana, Namibia, Zambia, and Zimbabwe, dating as far back as 1990.[5]
+   - Show Me the Way to Go Home The song was recorded by several artists in the 1920s,
+     including radio personalities The Happiness Boys,[2] Vincent Lopez and his Orchestra,[2]
+     and the California Ramblers.[3] Throughout the twentieth into the twenty-first
+     century it has been recorded by numerous artists.
+ - source_sentence: 'Represent this sentence for searching relevant passages: who wrote
+     the book of john in the bible'
+   sentences:
+   - Gospel of John Although the Gospel of John is anonymous,[1] Christian tradition
+     historically has attributed it to John the Apostle, son of Zebedee and one of
+     Jesus' Twelve Apostles. The gospel is so closely related in style and content
+     to the three surviving Johannine epistles that commentators treat the four books,[2]
+     along with the Book of Revelation, as a single corpus of Johannine literature,
+     albeit not necessarily written by the same author.[Notes 1]
+   - Levi Strauss & Co. Levi Strauss & Co. /ˌliːvaɪ ˈstraʊs/ is a privately held[5]
+     American clothing company known worldwide for its Levi's /ˌliːvaɪz/ brand of denim
+     jeans. It was founded in May 1853[6] when German immigrant Levi Strauss came from
+     Buttenheim, Bavaria, to San Francisco, California to open a west coast branch
+     of his brothers' New York dry goods business.[7] The company's corporate headquarters
+     is located in the Levi's Plaza in San Francisco.[8]
+   - Saturday Night Fever Tony's friends come to the car along with an intoxicated
+     Annette. Joey says she has agreed to have sex with everyone. Tony tries to lead
+     her away, but is subdued by Double J and Joey, and sullenly leaves with the group
+     in the car. Double J and Joey rape Annette. Bobby C. pulls the car over on the
+     Verrazano-Narrows Bridge for their usual cable-climbing antics. Instead of abstaining
+     as usual, Bobby performs stunts more recklessly than the rest of the gang. Realizing
+     that he is acting recklessly, Tony tries to get him to come down. Bobby's strong
+     sense of despair, the situation with Pauline, and Tony's broken promise to call
+     him earlier that day all lead to a suicidal tirade about Tony's lack of caring
+     before Bobby slips and falls to his death in the water below.
+ - source_sentence: 'Represent this sentence for searching relevant passages: what
+     type of habitat do sea turtles live in'
+   sentences:
+   - Turbidity Governments have set standards on the allowable turbidity in drinking
+     water. In the United States, systems that use conventional or direct filtration
+     methods turbidity cannot be higher than 1.0 nephelometric turbidity units (NTU)
+     at the plant outlet and all samples for turbidity must be less than or equal to
+     0.3 NTU for at least 95 percent of the samples in any month. Systems that use
+     filtration other than the conventional or direct filtration must follow state
+     limits, which must include turbidity at no time exceeding 5 NTU. Many drinking
+     water utilities strive to achieve levels as low as 0.1 NTU.[11] The European standards
+     for turbidity state that it must be no more than 4 NTU.[12] The World Health Organization,
+     establishes that the turbidity of drinking water should not be more than 5 NTU,
+     and should ideally be below 1 NTU.[13]
+   - 'List of 1924 Winter Olympics medal winners Finnish speed skater Clas Thunberg
+     topped the medal count with five medals: three golds, one silver, and one bronze.
+     One of his competitors, Roald Larsen of Norway, also won five medals, with two
+     silver and three bronze medal-winning performances.[3] The first gold medalist
+     at these Games—and therefore the first gold medalist in Winter Olympic history—was
+     American speed skater Charles Jewtraw. Only one medal change took place after
+     the Games: in the ski jump competition, a marking error deprived American athlete
+     Anders Haugen of a bronze medal. Haugen pursued an appeal to the IOC many years
+     after the fact; he was awarded the medal after a 1974 decision in his favor.[1]'
+   - Sea turtle Sea turtles are generally found in the waters over continental shelves.
+     During the first three to five years of life, sea turtles spend most of their
+     time in the pelagic zone floating in seaweed mats. Green sea turtles in particular
+     are often found in Sargassum mats, in which they find shelter and food.[14] Once
+     the sea turtle has reached adulthood it moves closer to the shore.[15] Females
+     will come ashore to lay their eggs on sandy beaches during the nesting season.[16]
+ - source_sentence: 'Represent this sentence for searching relevant passages: what
+     triggers the release of calcium from the sarcoplasmic reticulum'
+   sentences:
+   - Pretty Little Liars (season 7) The season consisted of 20 episodes, in which ten
+     episodes aired in the summer of 2016, with the remaining ten episodes aired from
+     April 2017.[2][3][4] The season's premiere aired on June 21, 2016, on Freeform.[5]
+     Production and filming began in the end of March 2016, which was confirmed by
+     showrunner I. Marlene King.[6] The season premiere was written by I. Marlene King
+     and directed by Ron Lagomarsino.[7] King revealed the title of the premiere on
+     Twitter on March 17, 2016.[8] On August 29, 2016, it was confirmed that this would
+     be the final season of the series.[9]
+   - Wentworth (TV series) A seventh season was commissioned in April 2018, before
+     the sixth-season premiere, with filming commencing the following week and a premiere
+     set for 2019.
+   - Sarcoplasmic reticulum Calcium ion release from the SR, occurs in the junctional
+     SR/terminal cisternae through a ryanodine receptor (RyR) and is known as a calcium
+     spark.[10] There are three types of ryanodine receptor, RyR1 (in skeletal muscle),
+     RyR2 (in cardiac muscle) and RyR3 (in the brain).[11] Calcium release through
+     ryanodine receptors in the SR is triggered differently in different muscles. In
+     cardiac and smooth muscle an electrical impulse (action potential) triggers calcium
+     ions to enter the cell through an L-type calcium channel located in the cell membrane
+     (smooth muscle) or T-tubule membrane (cardiac muscle). These calcium ions bind
+     to and activate the RyR, producing a larger increase in intracellular calcium.
+     In skeletal muscle, however, the L-type calcium channel is bound to the RyR. Therefore
+     activation of the L-type calcium channel, via an action potential, activates the
+     RyR directly, causing calcium release (see calcium sparks for more details).[12]
+     Also, caffeine (found in coffee) can bind to and stimulate RyR. Caffeine works
+     by making the RyR more sensitive to either the action potential (skeletal muscle)
+     or calcium (cardiac or smooth muscle) therefore producing calcium sparks more
+     often (this can result in increased heart rate, which is why we feel more awake
+     after coffee).[13]
+ - source_sentence: 'Represent this sentence for searching relevant passages: what
+     topic do all scientific questions have in common'
+   sentences:
+   - 'Jane Wyatt Wyatt portrayed Amanda Grayson, Spock''s mother and Ambassador Sarek''s
+     (Mark Lenard) wife, in the 1967 episode "Journey to Babel" of the original NBC
+     series, Star Trek, and the 1986 film Star Trek IV: The Voyage Home.[9] Wyatt was
+     once quoted as saying her fan mail for these two appearances in this role exceeded
+     that of Lost Horizon. In 1969, she made a guest appearance on Here Come the Brides,
+     but did not have any scenes with Mark Lenard, who was starring on the show as
+     sawmill owner Aaron Stemple.'
+   - Minnesota Vikings The Vikings played in Super Bowl XI, their third Super Bowl
+     (fourth overall) in four years, against the Oakland Raiders at the Rose Bowl in
+     Pasadena, California, on January 9, 1977. The Vikings, however, lost 32–14.[1]
+   - List of topics characterized as pseudoscience Criticism of pseudoscience, generally
+     by the scientific community or skeptical organizations, involves critiques of
+     the logical, methodological, or rhetorical bases of the topic in question.[1]
+     Though some of the listed topics continue to be investigated scientifically, others
+     were only subject to scientific research in the past, and today are considered
+     refuted but resurrected in a pseudoscientific fashion. Other ideas presented here
+     are entirely non-scientific, but have in one way or another infringed on scientific
+     domains or practices.
+ pipeline_tag: sentence-similarity
+ library_name: sentence-transformers
+ ---
+
+ # SentenceTransformer based on BAAI/bge-small-en-v1.5
+
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) <!-- at revision 5c38ec7c405ec4b44b94cc5a9bb96e735b38267a -->
+ - **Maximum Sequence Length:** 512 tokens
+ - **Output Dimensionality:** 384 dimensions
+ - **Similarity Function:** Cosine Similarity
+ <!-- - **Training Dataset:** Unknown -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
+   (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+   (2): Normalize()
+ )
+ ```
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("srinivasanAI/bge-small-my-qna-model")
+ # Run inference
+ sentences = [
+     'Represent this sentence for searching relevant passages: what topic do all scientific questions have in common',
+     'List of topics characterized as pseudoscience Criticism of pseudoscience, generally by the scientific community or skeptical organizations, involves critiques of the logical, methodological, or rhetorical bases of the topic in question.[1] Though some of the listed topics continue to be investigated scientifically, others were only subject to scientific research in the past, and today are considered refuted but resurrected in a pseudoscientific fashion. Other ideas presented here are entirely non-scientific, but have in one way or another infringed on scientific domains or practices.',
+     'Jane Wyatt Wyatt portrayed Amanda Grayson, Spock\'s mother and Ambassador Sarek\'s (Mark Lenard) wife, in the 1967 episode "Journey to Babel" of the original NBC series, Star Trek, and the 1986 film Star Trek IV: The Voyage Home.[9] Wyatt was once quoted as saying her fan mail for these two appearances in this role exceeded that of Lost Horizon. In 1969, she made a guest appearance on Here Come the Brides, but did not have any scenes with Mark Lenard, who was starring on the show as sawmill owner Aaron Stemple.',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 384]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Dataset
+
+ #### Unnamed Dataset
+
+ * Size: 100,231 training samples
+ * Columns: <code>sentence_0</code> and <code>sentence_1</code>
+ * Approximate statistics based on the first 1000 samples:
+   |         | sentence_0 | sentence_1 |
+   |:--------|:-----------|:-----------|
+   | type    | string     | string     |
+   | details | <ul><li>min: 18 tokens</li><li>mean: 19.69 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 139.68 tokens</li><li>max: 512 tokens</li></ul> |
+ * Samples:
+   | sentence_0 | sentence_1 |
+   |:-----------|:-----------|
+   | <code>Represent this sentence for searching relevant passages: where did strangers prey at night take place</code> | <code>The Strangers: Prey at Night In a secluded trailer park in Salem, Arkansas, the three masked killers, The Walker family — Dollface, Pin Up Girl, and the Man in the Mask — arrive. Dollface kills a female occupant and then lies down in bed next to the woman's sleeping husband.</code> |
+   | <code>Represent this sentence for searching relevant passages: what is the average height of the highest peaks in the drakensberg mountain range</code> | <code>Drakensberg During the past 20 million years, further massive upliftment, especially in the East, has taken place in Southern Africa. As a result, most of the plateau lies above 1,000 m (3,300 ft) despite the extensive erosion. The plateau is tilted such that its highest point is in the east, and it slopes gently downwards towards the west and south. The elevation of the edge of the eastern escarpments is typically in excess of 2,000 m (6,600 ft). It reaches its highest point (over 3,000 m (9,800 ft)) where the escarpment forms part of the international border between Lesotho and the South African province of KwaZulu-Natal.[5][8]</code> |
+   | <code>Represent this sentence for searching relevant passages: name the two epics of india which are woven around with legends</code> | <code>Indian epic poetry Indian epic poetry is the epic poetry written in the Indian subcontinent, traditionally called Kavya (or Kāvya; Sanskrit: काव्य, IAST: kāvyá). The Ramayana and the Mahabharata, which were originally composed in Sanskrit and later translated into many other Indian languages, and The Five Great Epics of Tamil Literature and Sangam literature are some of the oldest surviving epic poems ever written.[1]</code> |
+ * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
+   ```json
+   {
+       "scale": 20.0,
+       "similarity_fct": "cos_sim"
+   }
+   ```
+
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+
+ - `per_device_train_batch_size`: 32
+ - `per_device_eval_batch_size`: 32
+ - `num_train_epochs`: 1
+ - `multi_dataset_batch_sampler`: round_robin
+
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: no
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 32
+ - `per_device_eval_batch_size`: 32
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 5e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1
+ - `num_train_epochs`: 1
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.0
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: False
+ - `fp16`: False
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: False
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: None
+ - `hub_always_push`: False
+ - `hub_revision`: None
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `include_for_metrics`: []
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`: 
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `use_liger_kernel`: False
+ - `liger_kernel_config`: None
+ - `eval_use_gather_object`: False
+ - `average_tokens_across_devices`: False
+ - `prompts`: None
+ - `batch_sampler`: batch_sampler
+ - `multi_dataset_batch_sampler`: round_robin
+
+ </details>
+
+ ### Training Logs
+ | Epoch | Step | Training Loss |
+ |:------:|:----:|:-------------:|
+ | 0.1596 | 500  | 0.0556        |
+ | 0.3192 | 1000 | 0.0245        |
+ | 0.4788 | 1500 | 0.0236        |
+ | 0.6384 | 2000 | 0.0179        |
+ | 0.7980 | 2500 | 0.0202        |
+ | 0.9575 | 3000 | 0.0184        |
+
+
+ ### Framework Versions
+ - Python: 3.11.13
+ - Sentence Transformers: 4.1.0
+ - Transformers: 4.53.3
+ - PyTorch: 2.6.0+cu124
+ - Accelerate: 1.9.0
+ - Datasets: 4.0.0
+ - Tokenizers: 0.21.2
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ #### MultipleNegativesRankingLoss
+ ```bibtex
+ @misc{henderson2017efficient,
+     title={Efficient Natural Language Response Suggestion for Smart Reply},
+     author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
+     year={2017},
+     eprint={1705.00652},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
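
The card above reports training with MultipleNegativesRankingLoss (scale 20.0, cosine similarity) on 100,231 (`sentence_0`, `sentence_1`) pairs at batch size 32 for one epoch. A minimal sketch of that setup with the sentence-transformers Trainer; the two in-memory rows below are illustrative stand-ins for the actual dataset, which is not part of this commit:

```python
# Sketch of the reported training setup: MultipleNegativesRankingLoss with
# scale=20.0 and cosine similarity over (query, passage) pairs; the other
# passages in each batch serve as in-batch negatives.
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-small-en-v1.5")

# Illustrative rows only; the real dataset has 100,231 such pairs.
train_dataset = Dataset.from_dict({
    "sentence_0": [
        "Represent this sentence for searching relevant passages: who wrote the book of john in the bible",
        "Represent this sentence for searching relevant passages: what type of habitat do sea turtles live in",
    ],
    "sentence_1": [
        "Gospel of John Although the Gospel of John is anonymous, Christian tradition historically has attributed it to John the Apostle.",
        "Sea turtle Sea turtles are generally found in the waters over continental shelves.",
    ],
})

loss = MultipleNegativesRankingLoss(model, scale=20.0)  # cosine similarity is the default

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
model.save_pretrained("bge-small-my-qna-model")
```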
config.json ADDED
@@ -0,0 +1,30 @@
+ {
+   "architectures": [
+     "BertModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 384,
+   "id2label": {
+     "0": "LABEL_0"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 1536,
+   "label2id": {
+     "LABEL_0": 0
+   },
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.53.3",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "__version__": {
+     "sentence_transformers": "4.1.0",
+     "transformers": "4.53.3",
+     "pytorch": "2.6.0+cu124"
+   },
+   "prompts": {},
+   "default_prompt_name": null,
+   "similarity_fn_name": "cosine"
+ }
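
`similarity_fn_name` is cosine, and because the pipeline's final module L2-normalizes every embedding, cosine similarity reduces to a plain dot product. A quick numerical check, using the repo ID from the usage section above:

```python
# Since the Normalize module makes every embedding unit-length,
# cosine similarity and the dot product coincide.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("srinivasanAI/bge-small-my-qna-model")
emb = model.encode(["a query", "a passage"])

print(np.linalg.norm(emb, axis=1))      # ~[1. 1.]: embeddings are unit-norm
dot = float(emb[0] @ emb[1])
cos = dot / (np.linalg.norm(emb[0]) * np.linalg.norm(emb[1]))
print(abs(dot - cos) < 1e-6)            # True
```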
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d4838de2b2870ef5f6a1c2872d9aaf915484a289608042c7376e8ae8409a355e
+ size 133462128
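
The 133,462,128-byte checkpoint is consistent with float32 weights for the BERT-small shape in config.json above; the remaining few kilobytes are the safetensors JSON header. A back-of-the-envelope count from those hyperparameters:

```python
# Rough float32 size estimate from config.json: vocab 30522, 512 positions,
# hidden 384, FFN 1536, 12 layers. The small remainder vs. 133,462,128 bytes
# is the safetensors header.
V, P, H, I, L = 30522, 512, 384, 1536, 12

embeddings = (V + P + 2) * H + 2 * H        # word/position/type tables + LayerNorm
per_layer = (
    4 * (H * H + H)                         # Q, K, V and attention output projections
    + 2 * H                                 # attention LayerNorm
    + (H * I + I) + (I * H + H)             # feed-forward up/down projections
    + 2 * H                                 # output LayerNorm
)
pooler = H * H + H                          # BertModel pooler head

params = embeddings + L * per_layer + pooler
print(params, params * 4)                   # 33,360,000 params -> 133,440,000 bytes
```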
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Normalize",
+     "type": "sentence_transformers.models.Normalize"
+   }
+ ]
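
modules.json chains three modules: a BERT encoder, CLS pooling, and L2 normalization. A sketch of assembling the same pipeline by hand, which is roughly what SentenceTransformer does when it reads this file (starting from the base model here, since the fine-tuned weights live in model.safetensors):

```python
# Hand-assembled equivalent of the three modules wired up in modules.json:
# Transformer encoder -> CLS pooling -> L2 normalization.
from sentence_transformers import SentenceTransformer, models

transformer = models.Transformer("BAAI/bge-small-en-v1.5", max_seq_length=512)
pooling = models.Pooling(
    transformer.get_word_embedding_dimension(),  # 384, as in 1_Pooling/config.json
    pooling_mode="cls",
)
normalize = models.Normalize()

model = SentenceTransformer(modules=[transformer, pooling, normalize])
print(model.encode(["hello world"]).shape)       # (1, 384)
```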
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": true
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,65 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": true,
+   "extra_special_tokens": {},
+   "mask_token": "[MASK]",
+   "max_length": 512,
+   "model_max_length": 512,
+   "never_split": null,
+   "pad_to_multiple_of": null,
+   "pad_token": "[PAD]",
+   "pad_token_type_id": 0,
+   "padding_side": "right",
+   "sep_token": "[SEP]",
+   "stride": 0,
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "truncation_side": "right",
+   "truncation_strategy": "longest_first",
+   "unk_token": "[UNK]"
+ }
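
The tokenizer is a lowercasing BertTokenizer capped at 512 tokens with right-side truncation. A quick sketch of how a query with the BGE retrieval prefix used throughout the card is tokenized; the repo ID is taken from the usage section:

```python
# The tokenizer lower-cases text and truncates on the right at 512 tokens,
# per do_lower_case / model_max_length / truncation_side above.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("srinivasanAI/bge-small-my-qna-model")
query = "Represent this sentence for searching relevant passages: Where Do Sea Turtles Live"
enc = tok(query, truncation=True, max_length=512)
print(tok.convert_ids_to_tokens(enc["input_ids"])[:6])
# e.g. ['[CLS]', 'represent', 'this', 'sentence', 'for', 'searching']
```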
vocab.txt ADDED
The diff for this file is too large to render. See raw diff