Jrinky committed on
Commit 8616223 · verified · 1 Parent(s): 3678c80

Add new SentenceTransformer model

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+     "word_embedding_dimension": 1024,
+     "pooling_mode_cls_token": false,
+     "pooling_mode_mean_tokens": true,
+     "pooling_mode_max_tokens": false,
+     "pooling_mode_mean_sqrt_len_tokens": false,
+     "pooling_mode_weightedmean_tokens": false,
+     "pooling_mode_lasttoken": false,
+     "include_prompt": true
+ }
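This pooling configuration enables mean pooling: the 1024-dimensional token embeddings produced by the transformer are averaged over all non-padding tokens to form the sentence embedding (CLS, max, weighted-mean, and last-token pooling are all disabled). A minimal sketch of what mean pooling computes, roughly equivalent to the `Pooling` module configured above (not the library's exact implementation):

```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Average token embeddings per sequence, ignoring padding positions."""
    mask = attention_mask.unsqueeze(-1).to(token_embeddings.dtype)  # (batch, seq, 1)
    summed = (token_embeddings * mask).sum(dim=1)                   # (batch, hidden)
    counts = mask.sum(dim=1).clamp(min=1e-9)                        # avoid division by zero
    return summed / counts                                          # (batch, 1024) for this model

# Toy example: 2 sequences, 5 token positions, 1024-dim token embeddings
emb = torch.randn(2, 5, 1024)
att = torch.tensor([[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]])
print(mean_pool(emb, att).shape)  # torch.Size([2, 1024])
```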
README.md ADDED
@@ -0,0 +1,667 @@
1
+ ---
2
+ tags:
3
+ - sentence-transformers
4
+ - sentence-similarity
5
+ - feature-extraction
6
+ - generated_from_trainer
7
+ - dataset_size:589508
8
+ - loss:CachedInfonce
9
+ base_model: jinaai/jina-embeddings-v3
10
+ widget:
11
+ - source_sentence: What are some examples of postgraduate fellowships in the United
12
+ States and Canada
13
+ sentences:
14
+ - "Fellowships as a training program\nFellowships may involve a short placement\
15
+ \ for capacity building, e.g., to get more experience in government, such as the\
16
+ \ American Association for the Advancement of Science's fellowships and the American\
17
+ \ Academy of Arts and Sciences Fellowship programs. Some institutions offer fellowships\
18
+ \ as a professional training program as well as a financial grant, such as the\
19
+ \ Balsillie School of International Affairs, where tuition and other fees are\
20
+ \ paid by the fellowship. Fellowships as a special membership grade\n\nFellows\
21
+ \ are often the highest grade of membership of many professional associations\
22
+ \ or learned societies, for example, the Chartered Institute of Arbitrators, the\
23
+ \ Chartered Governance Institute or Royal College of Surgeons. Lower grades are\
24
+ \ referred to as members (who typically share voting rights with the fellows),\
25
+ \ or associates (who may or may not, depending on whether \"associate\" status\
26
+ \ is a form of full membership). Additional grades of membership exist in, for\
27
+ \ example, the IEEE and the ACM. Fellowships of this type can be awarded as a\
28
+ \ title of honor in their own right, e.g. the Fellowship of the Royal Society\
29
+ \ (FRS). Exclusive learned societies such as the Royal Society have Fellow as\
30
+ \ the only grade of membership. Appointment as an honorary fellow in a learned\
31
+ \ or professional society can be either to honour exceptional achievement or service\
32
+ \ within the professional domain of the awarding body or to honour contributions\
33
+ \ related to the domain from someone who is professionally outside it. Membership\
34
+ \ of the awarding body may or may not be a requirement. How a fellowship is awarded\
35
+ \ varies for each society, but may typically involve some or all of these:\n A\
36
+ \ qualifying period in a lower grade\n Passing a series of examinations\n Nomination\
37
+ \ by two existing fellows who know the applicant professionally\n Evidence of\
38
+ \ continued formal training post-qualification\n Evidence of substantial achievement\
39
+ \ in the subject area\n Submission of a thesis or portfolio of works which will\
40
+ \ be examined\n Election by a vote of the fellowship\n\nIn ancient universities\n\
41
+ \nAt the ancient universities of the University of Oxford, the University of Cambridge,\
42
+ \ and Trinity College, Dublin, members of the teaching staff typically have two\
43
+ \ affiliations: one as a reader, lecturer, or other academic rank within a department\
44
+ \ of the university, as at other universities, and a second affiliation as a fellow\
45
+ \ of one of the colleges of the university. The fellows, sometimes referred to\
46
+ \ as university dons, form the governing body of the college. They may elect a\
47
+ \ council to handle day-to-day management."
48
+ - If you are an enrolled domestic or international student studying a full degree
49
+ program, you may be eligible to study overseas! We have over 70 partner institutions
50
+ worldwide and the opportunities are endless. Visit our USC International and Study
51
+ Overseas blog to learn more about the amazing experiences our students are having
52
+ abroad.
53
+ - 'The title (senior) fellow can also be bestowed to an academic member of staff
54
+ upon retirement who continues to be affiliated to a university in the United Kingdom.
55
+ The term teaching fellow or teaching assistant is used, in the United States and
56
+ United Kingdom, in secondary school, high school and middle school setting for
57
+ students or adults that assist a teacher with one or more classes. Medical fellowships
58
+
59
+
60
+ In US medical institutions, a fellow refers to someone who has completed residency
61
+ training (e.g. in internal medicine, pediatrics, general surgery, etc.) and is
62
+ currently in a 1 to 3 year subspecialty training program (e.g. cardiology, pediatric
63
+ nephrology, transplant surgery, etc.). Research fellowships
64
+
65
+
66
+ As an academic position
67
+
68
+
69
+ The title of research fellow may be used to denote an academic position at a university
70
+ or a similar institution; it is roughly equivalent to the title of lecturer in
71
+ the Commonwealth teaching career pathway. As a financial grant
72
+
73
+ Research fellow may also refer to the recipient of academic financial grant or
74
+ scholarship. For example, in Germany, institutions such as the Alexander von Humboldt
75
+ Foundation offer research fellowship for postdoctoral research and refer to the
76
+ holder as research fellows, while the award holder may formally hold a specific
77
+ academic title at their home institution (e.g., Privatdozent). These are often
78
+ shortened to the name of the programme or organization, e.g. Dorothy Hodgkin Fellow
79
+ rather than Dorothy Hodgkin Research Fellow, except where this might cause confusion
80
+ with another fellowship, (e.g. Royal Society University Research Fellowship.)'
81
+ - "In the context of graduate school in the United States and Canada, a fellow is\
82
+ \ a recipient of a postgraduate fellowship. Examples include the NSF Graduate\
83
+ \ Research Fellowship, the DoD National Defense Science and Engineering Graduate\
84
+ \ Fellowship, the DOE Computational Science Graduate Fellowship, the Guggenheim\
85
+ \ Fellowship, the Rosenthal Fellowship, the Frank Knox Memorial Fellowship, the\
86
+ \ Woodrow Wilson Teaching Fellowship and the Presidential Management Fellowship.\
87
+ \ It is granted to prospective or current students, on the basis of their academic\
88
+ \ or research achievements. In the UK, research fellowships are awarded to support\
89
+ \ postdoctoral researchers such as those funded by the Wellcome Trust and the\
90
+ \ Biotechnology and Biological Sciences Research Council (BBSRC). At ETH Zurich,\
91
+ \ postdoctoral fellowships support incoming researchers. The MacArthur Fellows\
92
+ \ Program (aka \"genius grant\") as prestigious research fellowship awarded in\
93
+ \ the United States. Fellowships as a training program\nFellowships may involve\
94
+ \ a short placement for capacity building, e.g., to get more experience in government,\
95
+ \ such as the American Association for the Advancement of Science's fellowships\
96
+ \ and the American Academy of Arts and Sciences Fellowship programs. Some institutions\
97
+ \ offer fellowships as a professional training program as well as a financial\
98
+ \ grant, such as the Balsillie School of International Affairs, where tuition\
99
+ \ and other fees are paid by the fellowship. Fellowships as a special membership\
100
+ \ grade\n\nFellows are often the highest grade of membership of many professional\
101
+ \ associations or learned societies, for example, the Chartered Institute of Arbitrators,\
102
+ \ the Chartered Governance Institute or Royal College of Surgeons. Lower grades\
103
+ \ are referred to as members (who typically share voting rights with the fellows),\
104
+ \ or associates (who may or may not, depending on whether \"associate\" status\
105
+ \ is a form of full membership). Additional grades of membership exist in, for\
106
+ \ example, the IEEE and the ACM. Fellowships of this type can be awarded as a\
107
+ \ title of honor in their own right, e.g. the Fellowship of the Royal Society\
108
+ \ (FRS). Exclusive learned societies such as the Royal Society have Fellow as\
109
+ \ the only grade of membership. Appointment as an honorary fellow in a learned\
110
+ \ or professional society can be either to honour exceptional achievement or service\
111
+ \ within the professional domain of the awarding body or to honour contributions\
112
+ \ related to the domain from someone who is professionally outside it. Membership\
113
+ \ of the awarding body may or may not be a requirement. How a fellowship is awarded\
114
+ \ varies for each society, but may typically involve some or all of these:\n A\
115
+ \ qualifying period in a lower grade\n Passing a series of examinations\n Nomination\
116
+ \ by two existing fellows who know the applicant professionally\n Evidence of\
117
+ \ continued formal training post-qualification\n Evidence of substantial achievement\
118
+ \ in the subject area\n Submission of a thesis or portfolio of works which will\
119
+ \ be examined\n Election by a vote of the fellowship\n\nIn ancient universities\n\
120
+ \nAt the ancient universities of the University of Oxford, the University of Cambridge,\
121
+ \ and Trinity College, Dublin, members of the teaching staff typically have two\
122
+ \ affiliations: one as a reader, lecturer, or other academic rank within a department\
123
+ \ of the university, as at other universities, and a second affiliation as a fellow\
124
+ \ of one of the colleges of the university. The fellows, sometimes referred to\
125
+ \ as university dons, form the governing body of the college. They may elect a\
126
+ \ council to handle day-to-day management. All fellows are entitled to certain\
127
+ \ privileges within their colleges, which may include dining at High Table (free\
128
+ \ of charge) and possibly the right to a room in college (free of charge). At\
129
+ \ Cambridge, retired academics may remain fellows. At Oxford, however, a Governing\
130
+ \ Body fellow would normally be elected a fellow emeritus and would leave the\
131
+ \ Governing Body upon his or her retirement. Distinguished old members of the\
132
+ \ college, or its benefactors and friends, might also be elected 'Honorary Fellow',\
133
+ \ normally for life; but beyond limited dining rights this is merely an honour.\
134
+ \ Most Oxford colleges have 'Fellows by Special Election' or 'Supernumerary Fellows',\
135
+ \ who may be members of the teaching staff, but not necessarily members of the\
136
+ \ Governing Body. Some senior administrators of a college such as bursars are\
137
+ \ made fellows, and thereby become members of the governing body, because of their\
138
+ \ importance to the running of a college."
139
+ - source_sentence: What kind of plants or decorations are described as popular, fresh,
140
+ and plentiful in the garden at this time of year
141
+ sentences:
142
+ - Enjoy the beautiful scent of gardenia, rosemary, and lavender from your garden.
143
+ Hurry this will not last.
144
+ - Things have been given the opportunity to grow whichever they want but not out
145
+ of neglect per se. Somehow it adds whimsy and mystery to the courtyard.
146
+ - I thought it might be fun to show how this garden goes though the season. The
147
+ perennials will be the fastest to clean up.. clear out the pathways, and bed them
148
+ down well.
149
+ - I'm not surprised that they are so popular, they are fresh, green, with the jolly
150
+ berries AND plentiful in the garden, this time of year, and also, so very decorative.
151
+ I am trying to use as little light and dof as possible, a challenge that I love.
152
+ - source_sentence: When was the Santa Venera church in Avola constructed
153
+ sentences:
154
+ - The dome collapsed in the earthquake of 1848, and was not reconstructed until
155
+ 1962 by the engineer Pietro Lojacono. The decorated three story facade, flanked
156
+ by volutes and obelisks, houses a statue of Saint Venera, patron of Avola, above
157
+ the central portal.
158
+ - 'Santa Venera is a Baroque style church located on Piazza Teatro in the town of
159
+ Avola, province of Siracusa, region of Sicily, Italy. History and description
160
+
161
+ Construction of a church at the site took place from 1713-1715 using designs attributed
162
+ to Michelangelo Alessi.'
163
+ - 'The Saint Bavo Church (Dutch: Sint-Bavokerk, Sint-Baafskerk) is a Dutch Reformed
164
+ church building in Aardenburg, Netherlands. The church was founded in 959 by monks
165
+ of the Saint Bavo''s Abbey in Ghent. Due to a rise in population this small church
166
+ was replaced by a Romanesque church which burned down in 1202. In 1220 the current
167
+ tower, nave and transept were built.'
168
+ - The decorated three story facade, flanked by volutes and obelisks, houses a statue
169
+ of Saint Venera, patron of Avola, above the central portal. The interior has three
170
+ naves.
171
+ - source_sentence: What is the last dream the speaker mentions
172
+ sentences:
173
+ - Have you ever felt like the dreams you had have never become reality? Have you
174
+ ever felt like you need someone to spark the flame for you
175
+ - I'm very new to this, so I'm not sure what I'm doing with the technical side of
176
+ things. Please bear with me if I've got anything wrong. "Night Thoughts And Dreams"
177
+ is the first thing I've written in about two years. I used to write all the time,
178
+ but then I just stopped, however "Sherlock" and Benedict Cumberbatch have inspired
179
+ me to have another go.
180
+ - I had a fantastic phone conversation with my brother today. I also had a nightmare
181
+ where a man pulled off his skin like a shirt.
182
+ - They don't bury me without my uniform." "My last dream is to be in Cooperstown-to
183
+ be with those guys.
184
+ - source_sentence: What is the description of the Myrmecoleon and what are its two
185
+ interpretations
186
+ sentences:
187
+ - 'The stone lies at the bottom of the sea and comes to life early in the morning.
188
+ When it rises from its resting-place to the surface of the sea, it opens its mouth
189
+ and takes in some heavenly dew, and the rays of the sun shine around it; thus
190
+ there grows within the stone a most precious, shining pearl indeed, conceived
191
+ from the heavenly dew and given lustre by the rays of the sun." Interpretations
192
+
193
+
194
+ There are two interpretations of what a Myrmecoleon is. In one version, the antlion
195
+ is so called because it is the "lion of ants", a large ant or small animal that
196
+ hides in the dust and kills ants. In the other version, it is a beast that is
197
+ the result of a mating between a lion and an ant. It has the face of a lion and
198
+ the body of an ant, with each part having its appropriate nature. Because the
199
+ lion part will only eat meat and the ant part can only digest grain, the ant-lion
200
+ starves.'
201
+ - It is found in Medieval bestiaries such as the Hortus Sanitatis of Jacob Meydenbach.
202
+ It is also referenced in some sources as a Formicaleon (Antlion), Formicaleun
203
+ or Mirmicioleon.
204
+ - Microdiprion is a genus of sawflies belonging to the family Diprionidae.
205
+ - Macrodon is a genus of marine ray-finned fishes belonging to the family Sciaenidae,
206
+ the drums and croakers.
207
+ pipeline_tag: sentence-similarity
208
+ library_name: sentence-transformers
209
+ ---
210
+
211
+ # SentenceTransformer based on jinaai/jina-embeddings-v3
212
+
213
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [jinaai/jina-embeddings-v3](https://huggingface.co/jinaai/jina-embeddings-v3) on the hard_negative_merged dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
214
+
215
+ ## Model Details
216
+
217
+ ### Model Description
218
+ - **Model Type:** Sentence Transformer
219
+ - **Base model:** [jinaai/jina-embeddings-v3](https://huggingface.co/jinaai/jina-embeddings-v3) <!-- at revision f1944de8402dcd5f2b03f822a4bc22a7f2de2eb9 -->
220
+ - **Maximum Sequence Length:** 2048 tokens
221
+ - **Output Dimensionality:** 1024 dimensions
222
+ - **Similarity Function:** Cosine Similarity
223
+ - **Training Dataset:**
224
+ - hard_negative_merged
225
+ <!-- - **Language:** Unknown -->
226
+ <!-- - **License:** Unknown -->
227
+
228
+ ### Model Sources
229
+
230
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
231
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
232
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
233
+
234
+ ### Full Model Architecture
235
+
236
+ ```
237
+ SentenceTransformer(
238
+ (transformer): Transformer(
239
+ (auto_model): XLMRobertaLoRA(
240
+ (roberta): XLMRobertaModel(
241
+ (embeddings): XLMRobertaEmbeddings(
242
+ (word_embeddings): ParametrizedEmbedding(
243
+ 250002, 1024, padding_idx=1
244
+ (parametrizations): ModuleDict(
245
+ (weight): ParametrizationList(
246
+ (0): LoRAParametrization()
247
+ )
248
+ )
249
+ )
250
+ (token_type_embeddings): ParametrizedEmbedding(
251
+ 1, 1024
252
+ (parametrizations): ModuleDict(
253
+ (weight): ParametrizationList(
254
+ (0): LoRAParametrization()
255
+ )
256
+ )
257
+ )
258
+ )
259
+ (emb_drop): Dropout(p=0.1, inplace=False)
260
+ (emb_ln): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
261
+ (encoder): XLMRobertaEncoder(
262
+ (layers): ModuleList(
263
+ (0-23): 24 x Block(
264
+ (mixer): MHA(
265
+ (rotary_emb): RotaryEmbedding()
266
+ (Wqkv): ParametrizedLinearResidual(
267
+ in_features=1024, out_features=3072, bias=True
268
+ (parametrizations): ModuleDict(
269
+ (weight): ParametrizationList(
270
+ (0): LoRAParametrization()
271
+ )
272
+ )
273
+ )
274
+ (inner_attn): FlashSelfAttention(
275
+ (drop): Dropout(p=0.1, inplace=False)
276
+ )
277
+ (inner_cross_attn): FlashCrossAttention(
278
+ (drop): Dropout(p=0.1, inplace=False)
279
+ )
280
+ (out_proj): ParametrizedLinear(
281
+ in_features=1024, out_features=1024, bias=True
282
+ (parametrizations): ModuleDict(
283
+ (weight): ParametrizationList(
284
+ (0): LoRAParametrization()
285
+ )
286
+ )
287
+ )
288
+ )
289
+ (dropout1): Dropout(p=0.1, inplace=False)
290
+ (drop_path1): StochasticDepth(p=0.0, mode=row)
291
+ (norm1): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
292
+ (mlp): Mlp(
293
+ (fc1): ParametrizedLinear(
294
+ in_features=1024, out_features=4096, bias=True
295
+ (parametrizations): ModuleDict(
296
+ (weight): ParametrizationList(
297
+ (0): LoRAParametrization()
298
+ )
299
+ )
300
+ )
301
+ (fc2): ParametrizedLinear(
302
+ in_features=4096, out_features=1024, bias=True
303
+ (parametrizations): ModuleDict(
304
+ (weight): ParametrizationList(
305
+ (0): LoRAParametrization()
306
+ )
307
+ )
308
+ )
309
+ )
310
+ (dropout2): Dropout(p=0.1, inplace=False)
311
+ (drop_path2): StochasticDepth(p=0.0, mode=row)
312
+ (norm2): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
313
+ )
314
+ )
315
+ )
316
+ (pooler): XLMRobertaPooler(
317
+ (dense): ParametrizedLinear(
318
+ in_features=1024, out_features=1024, bias=True
319
+ (parametrizations): ModuleDict(
320
+ (weight): ParametrizationList(
321
+ (0): LoRAParametrization()
322
+ )
323
+ )
324
+ )
325
+ (activation): Tanh()
326
+ )
327
+ )
328
+ )
329
+ )
330
+ (pooler): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
331
+ (normalizer): Normalize()
332
+ )
333
+ ```
334
+
335
+ ## Usage
336
+
337
+ ### Direct Usage (Sentence Transformers)
338
+
339
+ First install the Sentence Transformers library:
340
+
341
+ ```bash
342
+ pip install -U sentence-transformers
343
+ ```
344
+
345
+ Then you can load this model and run inference.
346
+ ```python
347
+ from sentence_transformers import SentenceTransformer
348
+
349
+ # Download from the 🤗 Hub
350
+ model = SentenceTransformer("Jrinky/jina_final_temp")
351
+ # Run inference
352
+ sentences = [
353
+ 'What is the description of the Myrmecoleon and what are its two interpretations',
354
+ 'The stone lies at the bottom of the sea and comes to life early in the morning. When it rises from its resting-place to the surface of the sea, it opens its mouth and takes in some heavenly dew, and the rays of the sun shine around it; thus there grows within the stone a most precious, shining pearl indeed, conceived from the heavenly dew and given lustre by the rays of the sun." Interpretations\n\nThere are two interpretations of what a Myrmecoleon is. In one version, the antlion is so called because it is the "lion of ants", a large ant or small animal that hides in the dust and kills ants. In the other version, it is a beast that is the result of a mating between a lion and an ant. It has the face of a lion and the body of an ant, with each part having its appropriate nature. Because the lion part will only eat meat and the ant part can only digest grain, the ant-lion starves.',
355
+ 'It is found in Medieval bestiaries such as the Hortus Sanitatis of Jacob Meydenbach. It is also referenced in some sources as a Formicaleon (Antlion), Formicaleun or Mirmicioleon.',
356
+ ]
357
+ embeddings = model.encode(sentences)
358
+ print(embeddings.shape)
359
+ # [3, 1024]
360
+
361
+ # Get the similarity scores for the embeddings
362
+ similarities = model.similarity(embeddings, embeddings)
363
+ print(similarities.shape)
364
+ # [3, 3]
365
+ ```
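+
+ The checkpoint also ships task-specific prompts in `config_sentence_transformers.json` (e.g. `retrieval.query` and `retrieval.passage`). The base jina-embeddings-v3 card selects the matching LoRA adapter and prompt by passing `task` and `prompt_name` to `encode`; assuming this finetune behaves the same way, asymmetric retrieval would look like this (a sketch, not verified here):
+
+ ```python
+ # Sketch: prompt/task names come from config_sentence_transformers.json
+ query_emb = model.encode(
+     ["When was the Santa Venera church in Avola constructed"],
+     task="retrieval.query",
+     prompt_name="retrieval.query",
+ )
+ doc_emb = model.encode(
+     ["Construction of a church at the site took place from 1713-1715."],
+     task="retrieval.passage",
+     prompt_name="retrieval.passage",
+ )
+ print(model.similarity(query_emb, doc_emb))  # higher score = more relevant
+ ```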
366
+
367
+ <!--
368
+ ### Direct Usage (Transformers)
369
+
370
+ <details><summary>Click to see the direct usage in Transformers</summary>
371
+
372
+ </details>
373
+ -->
374
+
375
+ <!--
376
+ ### Downstream Usage (Sentence Transformers)
377
+
378
+ You can finetune this model on your own dataset.
379
+
380
+ <details><summary>Click to expand</summary>
381
+
382
+ </details>
383
+ -->
384
+
385
+ <!--
386
+ ### Out-of-Scope Use
387
+
388
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
389
+ -->
390
+
391
+ <!--
392
+ ## Bias, Risks and Limitations
393
+
394
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
395
+ -->
396
+
397
+ <!--
398
+ ### Recommendations
399
+
400
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
401
+ -->
402
+
403
+ ## Training Details
404
+
405
+ ### Training Dataset
406
+
407
+ #### hard_negative_merged
408
+
409
+ * Dataset: hard_negative_merged
410
+ * Size: 589,508 training samples
411
+ * Columns: <code>anchor</code>, <code>positive</code>, <code>negative_1</code>, <code>negative_2</code>, and <code>negative_3</code>
412
+ * Approximate statistics based on the first 1000 samples:
413
+ | | anchor | positive | negative_1 | negative_2 | negative_3 |
414
+ |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
415
+ | type | string | string | string | string | string |
416
+ | details | <ul><li>min: 6 tokens</li><li>mean: 17.37 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 122.81 tokens</li><li>max: 2048 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 128.36 tokens</li><li>max: 2048 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 110.47 tokens</li><li>max: 1920 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 103.93 tokens</li><li>max: 2048 tokens</li></ul> |
417
+ * Samples:
418
+ | anchor | positive | negative_1 | negative_2 | negative_3 |
419
+ |:------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
420
+ | <code>What does the plot of the story revolve around</code> | <code>Respawn points are created when the player accumulates enough blood collected from slain enemies or in-level blood pickups, and idles a certain distance away from immediate level hazards. Plot<br>The plot follows the events of an unnamed young girl's arrival at the Lafcadio Academy for Troubled Young Ladies.</code> | <code>An really interesting idea behind the story and one that had me unable to put it down some nights! View all my reviews</code> | <code>And everything has such meaning and depth behind it. Nothing is just said casually, and it is all so thoughfully laced with emotion and words to draw you in to the story itself.</code> | <code>It has a terribly implication that this flashback may be lasting more than a chapter. It's not as if we aren't learning anything of importance. I'm just not curious where this is going. I'm wondering when it'll finally be over. Not something you want from your audience as a story teller. In no simple terms.</code> |
421
+ | <code>What type of warranty is offered with the Zhumell Signature 10x42 binoculars</code> | <code>The Signature is also backed by Zhumell's full, 25-year, no-fault warranty, ensuring a lifetime of worry-free viewing. The Zhumell Signature 10x42 binoculars will give you plenty of power - whenever you need it, for as long as you need it!</code> | <code>This item is backed by a Limited Lifetime Warranty. In the event this item should fail due to manufacturing defects during intended use, we will exchange the part free of charge (excludes shipping charges) for the original purchaser.</code> | <code>if you have different ideas or better suggestion ,be free to leave message . Warranty and terms:<br>-Warranty year is 1 year under normal use,the warranty period is a year from the date of original purchase.</code> | <code>We have more than 55 years of experience designing, manufacturing and refining custom optical lenses for use in a range of industries. Our production staff follows strict ISO 9001 standards and uses state-of-the-art metrology equipment to test finished lenses for quality and performance.</code> |
422
+ | <code>When did he announce his retirement from all professional rugby</code> | <code>He was named in the Pro12 Dream Teams at the end of the 2014/15 and 2016/17 seasons. In April 2021 he announced his retirement from all professional rugby. International career<br><br>Qualifying to play internationally for Scotland through his Glasgow-born mother, on 24 October 2012 he was named in the full Scottish national team for the 2012 end-of-year rugby union tests.</code> | <code>After retiring from full-time professional football, he worked as a production controller before becoming a sales administrator for International Computers Limited. He lived in Southampton for the rest of his life and died on 28 January 2014.</code> | <code>On December 15 2018, it was announced that he had left WWE voluntarily. Professional boxing record<br>{| class="wikitable" style="text-align:center;"<br>| style="text-align:center;" colspan="8" | 6 Wins (3 knockouts, 3 decisions), 0 Losses, 0 Draws<br>|- style="text-align:center; background:#e3e3e3;"<br>| style="border-style:none none solid solid;" | Res.</code> | <code>Since retiring from football he has worked as a journalist for the Professional Footballers' Association. References<br><br>English men's footballers<br>Bristol City F.C. players<br>Kidderminster Harriers F.C. players<br>Yeovil Town F.C.</code> |
423
+ * Loss: <code>cachedselfloss2.CachedInfonce</code> with these parameters:
424
+ ```json
425
+ {
426
+ "scale": 20.0,
427
+ "similarity_fct": "cos_sim"
428
+ }
429
+ ```
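+
+ For reference, this loss belongs to the InfoNCE family: each anchor is scored against its positive and the negative passages with cosine similarity multiplied by `scale` = 20, and a cross-entropy objective pushes the positive to the top. A sketch of the objective (the exact `cachedselfloss2.CachedInfonce` implementation is not included in this repository):
+
+ ```latex
+ \mathcal{L}(q_i) = -\log \frac{\exp\left(20 \cdot \cos(q_i, p_i^{+})\right)}
+                               {\sum_{j} \exp\left(20 \cdot \cos(q_i, p_j)\right)}
+ ```
+
+ where the sum runs over the positive and the negative passages in the batch, and the gradient caching of Gao et al. (2021, cited below) is what makes the large per-device batch size of 500 feasible under limited memory.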
430
+
431
+ ### Evaluation Dataset
432
+
433
+ #### hard_negative_merged
434
+
435
+ * Dataset: hard_negative_merged
436
+ * Size: 589,508 evaluation samples
437
+ * Columns: <code>anchor</code>, <code>positive</code>, <code>negative_1</code>, <code>negative_2</code>, and <code>negative_3</code>
438
+ * Approximate statistics based on the first 1000 samples:
439
+ | | anchor | positive | negative_1 | negative_2 | negative_3 |
440
+ |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
441
+ | type | string | string | string | string | string |
442
+ | details | <ul><li>min: 4 tokens</li><li>mean: 17.27 tokens</li><li>max: 39 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 120.45 tokens</li><li>max: 2031 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 123.54 tokens</li><li>max: 2018 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 114.85 tokens</li><li>max: 1860 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 115.74 tokens</li><li>max: 1605 tokens</li></ul> |
443
+ * Samples:
444
+ | anchor | positive | negative_1 | negative_2 | negative_3 |
445
+ |:----------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
446
+ | <code>What could the term 'Golia' refer to</code> | <code>Golia may refer to:<br><br>Golia (surname)<br>Golia, Ganjam<br>Golia Monastery<br>1226 Golia</code> | <code>Gouka may refer to:<br><br> 9708 Gouka, a main-belt asteroid after the Dutch astronomer Adriaan Gouka<br> Eric Gouka (born 1970), Dutch cricketer<br> Gouka, Benin, a town and arrondissement</code> | <code>Gottschelia is a genus of liverworts belonging to the family Cephaloziellaceae.</code> | <code>Agila may refer to:<br><br>Agila I (died 554), Visigothic king<br>Agila II (died 714), Visigothic king<br>Agila 2, the first Filipino satellite<br>Agila (album), a 1996 album by Spanish rock band Extremoduro<br>Agila (film), a 1980 Philippine film directed by Eddie Romero<br>Agila (TV series), a 1987 Philippine teledrama series<br>Agila Town, Benue State, Nigeria<br>Opel Agila or Vauxhall Agila, a city car<br><br>See also<br>Agila division, the 10th Infantry Division of the Philippine Army<br>Aguila (disambiguation)</code> |
447
+ | <code>What is the timeframe in which Itera plans to potentially make an agreement with a financial institution</code> | <code>As Itera's President Igor Makarov reported at today's meeting of the Russian Gas Society in Moscow, the gas company could make an agreement with a financial institution, which would make the most profitable and optimum offer, in the next two to three months. According to him, they are currently holding negotiations with several financial enterprises, which specialize in introducing companies to the financial market.</code> | <code>The process from receipt of the funding proposal to completion of due diligence is incredibly quick, with a goal of 30 days. After initial evaluation of their proposals, a selected number of start-ups, usually 6 to 8, are asked to make preliminary presentations to the steering committee.</code> | <code>Coinexchange, Cryptopia, YoBit, HitBtc, Binance, Bittrex<br>Q1 2018 : Partners announced (Debit card & Merchants) We are currently in negotiation with major payment providers to offer you a worldwide usable card. Q1/2 2018 : ETHX Beta Wallet release (Android, Windows, iOS) and debit cart pre-order<br>Q3 201 : More partnerships Wider range of companies accepting ETHX. First targets are the biggest e-commerce websites. We will release a beta application to collect user reviews and answer to the community. The app is expected to come out in Q1 2018 on Android and later on iOS. We are very sensitive about our community welfare, so we try to do our best to keep our members informed about the latest news. The app will also help us to inform and get suggestions. Ethereum X is community driven. If you are also a cryptography and distributed ledger tech-nology enthusiast and want to support the project, please feel free to contact us. Additional developers as well as community managers for our social...</code> | <code>The project will be floated in the market for solicitation of expression of interest from the potential investors in June 2017. The land slots will be awarded to the successful bidders based on evaluation by the end of August, 2017. The Monitoring and Evaluation (M&E) of forest sites, awarded to successful bidders, will be done in collaboration with the Forestry, Wildlife & Fisheries Department, Government of the Punjab, as per the provisions of PPP Act, 2014, and The Punjab Forest (Amendment) Act, 2016. Revenue sharing will be done in this initiative. The Company in order to effectively reach out to the business community is organizing seminars in collaboration with various Chambers of Commerce & Industry to sensitize business groups to invest in the opportunity.</code> |
448
+ | <code>What role does File History play in the issue being discussed</code> | <code>What has File History got to do with the problem<br>I don't know but maybe someone at DC does<br>I post the question..... get lots of ideas and methods to remove the naughty files, but I still don't know why deleting file history worked unless the file history is tacked onto the file somehow<br>Since then I've been checking more of the "includes folders" for more over-long files and trying to figure what to do with them. The files are easy to find once you start paying attention<br>Open a folder and if it contains extra long files a scroll bar appears at the bottom of the page<br>Found some more files and started playing.</code> | <code>Newspapers feature stories about lost computers and memory sticks but a more common and longstanding problem is about staff accessing records that they have no right to see. It has always been possible for staff to look at paper records, and in most cases, there is no track of record.</code> | <code>In data vault it is referred to as the record source. Background <br>The need to identify systems of record can become acute in organizations where management information systems have been built by taking output data from multiple source systems, re-processing this data, and then re-presenting the result for a new business use.</code> | <code>The idea of preservation, in the sense of both immortalization and protection is addressed. How do we decide what to remember from history, and what do we leave out</code> |
449
+ * Loss: <code>cachedselfloss2.CachedInfonce</code> with these parameters:
450
+ ```json
451
+ {
452
+ "scale": 20.0,
453
+ "similarity_fct": "cos_sim"
454
+ }
455
+ ```
456
+
457
+ ### Training Hyperparameters
458
+ #### Non-Default Hyperparameters
459
+
460
+ - `eval_strategy`: steps
461
+ - `per_device_train_batch_size`: 500
462
+ - `per_device_eval_batch_size`: 500
463
+ - `learning_rate`: 2e-05
464
+ - `num_train_epochs`: 10
465
+ - `warmup_ratio`: 0.1
466
+ - `bf16`: True
467
+ - `batch_sampler`: no_duplicates
468
+
469
+ #### All Hyperparameters
470
+ <details><summary>Click to expand</summary>
471
+
472
+ - `overwrite_output_dir`: False
473
+ - `do_predict`: False
474
+ - `eval_strategy`: steps
475
+ - `prediction_loss_only`: True
476
+ - `per_device_train_batch_size`: 500
477
+ - `per_device_eval_batch_size`: 500
478
+ - `per_gpu_train_batch_size`: None
479
+ - `per_gpu_eval_batch_size`: None
480
+ - `gradient_accumulation_steps`: 1
481
+ - `eval_accumulation_steps`: None
482
+ - `torch_empty_cache_steps`: None
483
+ - `learning_rate`: 2e-05
484
+ - `weight_decay`: 0.0
485
+ - `adam_beta1`: 0.9
486
+ - `adam_beta2`: 0.999
487
+ - `adam_epsilon`: 1e-08
488
+ - `max_grad_norm`: 1.0
489
+ - `num_train_epochs`: 10
490
+ - `max_steps`: -1
491
+ - `lr_scheduler_type`: linear
492
+ - `lr_scheduler_kwargs`: {}
493
+ - `warmup_ratio`: 0.1
494
+ - `warmup_steps`: 0
495
+ - `log_level`: passive
496
+ - `log_level_replica`: warning
497
+ - `log_on_each_node`: True
498
+ - `logging_nan_inf_filter`: True
499
+ - `save_safetensors`: True
500
+ - `save_on_each_node`: False
501
+ - `save_only_model`: False
502
+ - `restore_callback_states_from_checkpoint`: False
503
+ - `no_cuda`: False
504
+ - `use_cpu`: False
505
+ - `use_mps_device`: False
506
+ - `seed`: 42
507
+ - `data_seed`: None
508
+ - `jit_mode_eval`: False
509
+ - `use_ipex`: False
510
+ - `bf16`: True
511
+ - `fp16`: False
512
+ - `fp16_opt_level`: O1
513
+ - `half_precision_backend`: auto
514
+ - `bf16_full_eval`: False
515
+ - `fp16_full_eval`: False
516
+ - `tf32`: None
517
+ - `local_rank`: 0
518
+ - `ddp_backend`: None
519
+ - `tpu_num_cores`: None
520
+ - `tpu_metrics_debug`: False
521
+ - `debug`: []
522
+ - `dataloader_drop_last`: True
523
+ - `dataloader_num_workers`: 0
524
+ - `dataloader_prefetch_factor`: None
525
+ - `past_index`: -1
526
+ - `disable_tqdm`: False
527
+ - `remove_unused_columns`: True
528
+ - `label_names`: None
529
+ - `load_best_model_at_end`: False
530
+ - `ignore_data_skip`: False
531
+ - `fsdp`: []
532
+ - `fsdp_min_num_params`: 0
533
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
534
+ - `tp_size`: 0
535
+ - `fsdp_transformer_layer_cls_to_wrap`: None
536
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
537
+ - `deepspeed`: None
538
+ - `label_smoothing_factor`: 0.0
539
+ - `optim`: adamw_torch
540
+ - `optim_args`: None
541
+ - `adafactor`: False
542
+ - `group_by_length`: False
543
+ - `length_column_name`: length
544
+ - `ddp_find_unused_parameters`: None
545
+ - `ddp_bucket_cap_mb`: None
546
+ - `ddp_broadcast_buffers`: False
547
+ - `dataloader_pin_memory`: True
548
+ - `dataloader_persistent_workers`: False
549
+ - `skip_memory_metrics`: True
550
+ - `use_legacy_prediction_loop`: False
551
+ - `push_to_hub`: False
552
+ - `resume_from_checkpoint`: None
553
+ - `hub_model_id`: None
554
+ - `hub_strategy`: every_save
555
+ - `hub_private_repo`: None
556
+ - `hub_always_push`: False
557
+ - `gradient_checkpointing`: False
558
+ - `gradient_checkpointing_kwargs`: None
559
+ - `include_inputs_for_metrics`: False
560
+ - `include_for_metrics`: []
561
+ - `eval_do_concat_batches`: True
562
+ - `fp16_backend`: auto
563
+ - `push_to_hub_model_id`: None
564
+ - `push_to_hub_organization`: None
565
+ - `mp_parameters`:
566
+ - `auto_find_batch_size`: False
567
+ - `full_determinism`: False
568
+ - `torchdynamo`: None
569
+ - `ray_scope`: last
570
+ - `ddp_timeout`: 1800
571
+ - `torch_compile`: False
572
+ - `torch_compile_backend`: None
573
+ - `torch_compile_mode`: None
574
+ - `dispatch_batches`: None
575
+ - `split_batches`: None
576
+ - `include_tokens_per_second`: False
577
+ - `include_num_input_tokens_seen`: False
578
+ - `neftune_noise_alpha`: None
579
+ - `optim_target_modules`: None
580
+ - `batch_eval_metrics`: False
581
+ - `eval_on_start`: False
582
+ - `use_liger_kernel`: False
583
+ - `eval_use_gather_object`: False
584
+ - `average_tokens_across_devices`: False
585
+ - `prompts`: None
586
+ - `batch_sampler`: no_duplicates
587
+ - `multi_dataset_batch_sampler`: proportional
588
+
589
+ </details>
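+
+ As a sketch of how these non-default values map onto the Sentence Transformers v3 training API (the actual training script is not included in this repository; argument names follow `SentenceTransformerTrainingArguments`):
+
+ ```python
+ from sentence_transformers import SentenceTransformerTrainingArguments
+ from sentence_transformers.training_args import BatchSamplers
+
+ args = SentenceTransformerTrainingArguments(
+     output_dir="jina_final_temp",               # hypothetical output path
+     eval_strategy="steps",
+     per_device_train_batch_size=500,
+     per_device_eval_batch_size=500,
+     learning_rate=2e-5,
+     num_train_epochs=10,
+     warmup_ratio=0.1,
+     bf16=True,
+     batch_sampler=BatchSamplers.NO_DUPLICATES,  # no duplicate texts within a batch
+ )
+ ```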
590
+
591
+ ### Training Logs
592
+ | Epoch | Step | Training Loss | Validation Loss |
593
+ |:------:|:----:|:-------------:|:---------------:|
594
+ | 0.1786 | 40 | 8.7768 | 8.5959 |
595
+ | 0.3571 | 80 | 8.8187 | 8.5129 |
596
+ | 0.5357 | 120 | 8.6175 | 8.2742 |
597
+ | 0.7143 | 160 | 8.0868 | 7.8954 |
598
+ | 0.8929 | 200 | 7.5681 | 7.3531 |
599
+ | 1.0714 | 240 | 7.0288 | 6.5431 |
600
+ | 1.25 | 280 | 6.2266 | 5.8462 |
601
+ | 1.4286 | 320 | 5.4682 | 5.2924 |
602
+ | 1.6071 | 360 | 5.0398 | 4.8148 |
603
+ | 1.7857 | 400 | 4.5158 | 4.4110 |
604
+ | 1.9643 | 440 | 4.184 | 4.0419 |
605
+ | 2.1429 | 480 | 3.7868 | 3.7165 |
606
+ | 2.3214 | 520 | 3.6258 | 3.4216 |
607
+ | 2.5 | 560 | 3.2262 | 3.1530 |
608
+ | 2.6786 | 600 | 3.0175 | 2.9128 |
609
+ | 2.8571 | 640 | 2.75 | 2.6999 |
610
+ | 3.0357 | 680 | 2.4915 | 2.5085 |
611
+
612
+
613
+ ### Framework Versions
614
+ - Python: 3.10.14
615
+ - Sentence Transformers: 3.4.1
616
+ - Transformers: 4.50.0
617
+ - PyTorch: 2.3.1+cu121
618
+ - Accelerate: 1.5.2
619
+ - Datasets: 3.4.1
620
+ - Tokenizers: 0.21.1
621
+
622
+ ## Citation
623
+
624
+ ### BibTeX
625
+
626
+ #### Sentence Transformers
627
+ ```bibtex
628
+ @inproceedings{reimers-2019-sentence-bert,
629
+ title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
630
+ author = "Reimers, Nils and Gurevych, Iryna",
631
+ booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
632
+ month = "11",
633
+ year = "2019",
634
+ publisher = "Association for Computational Linguistics",
635
+ url = "https://arxiv.org/abs/1908.10084",
636
+ }
637
+ ```
638
+
639
+ #### CachedInfonce
640
+ ```bibtex
641
+ @misc{gao2021scaling,
642
+ title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
643
+ author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
644
+ year={2021},
645
+ eprint={2101.06983},
646
+ archivePrefix={arXiv},
647
+ primaryClass={cs.LG}
648
+ }
649
+ ```
650
+
651
+ <!--
652
+ ## Glossary
653
+
654
+ *Clearly define terms in order to be accessible across audiences.*
655
+ -->
656
+
657
+ <!--
658
+ ## Model Card Authors
659
+
660
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
661
+ -->
662
+
663
+ <!--
664
+ ## Model Card Contact
665
+
666
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
667
+ -->
config.json ADDED
@@ -0,0 +1,67 @@
+ {
+     "_name_or_path": "/root/checkpoint-680_merged_3to1",
+     "architectures": [
+         "XLMRobertaLoRA"
+     ],
+     "attention_probs_dropout_prob": 0.1,
+     "auto_map": {
+         "AutoConfig": "configuration_xlm_roberta.XLMRobertaFlashConfig",
+         "AutoModel": "jinaai/xlm-roberta-flash-implementation--modeling_lora.XLMRobertaLoRA",
+         "AutoModelForMaskedLM": "jinaai/xlm-roberta-flash-implementation--modeling_xlm_roberta.XLMRobertaForMaskedLM",
+         "AutoModelForPreTraining": "jinaai/xlm-roberta-flash-implementation--modeling_xlm_roberta.XLMRobertaForPreTraining"
+     },
+     "bos_token_id": 0,
+     "classifier_dropout": null,
+     "emb_pooler": null,
+     "eos_token_id": 2,
+     "hidden_act": "gelu",
+     "hidden_dropout_prob": 0.1,
+     "hidden_size": 1024,
+     "initializer_range": 0.02,
+     "intermediate_size": 4096,
+     "layer_norm_eps": 1e-05,
+     "load_trained_adapters": true,
+     "lora_adaptations": [
+         "retrieval.query",
+         "retrieval.passage",
+         "separation",
+         "classification",
+         "text-matching"
+     ],
+     "lora_alpha": 1,
+     "lora_dropout_p": 0.0,
+     "lora_main_params_trainable": false,
+     "lora_rank": 4,
+     "matryoshka_dimensions": [
+         32,
+         64,
+         128,
+         256,
+         512,
+         768,
+         1024
+     ],
+     "max_position_embeddings": 8194,
+     "model_type": "xlm-roberta",
+     "num_attention_heads": 16,
+     "num_hidden_layers": 24,
+     "output_past": true,
+     "pad_token_id": 1,
+     "position_embedding_type": "rotary",
+     "rotary_emb_base": 20000.0,
+     "task_instructions": {
+         "classification": "",
+         "retrieval.passage": "Represent the document for retrieval: ",
+         "retrieval.query": "Represent the query for retrieving evidence documents: ",
+         "separation": "",
+         "text-matching": ""
+     },
+     "torch_dtype": "bfloat16",
+     "transformers_version": "4.49.0",
+     "truncate_dim": null,
+     "type_vocab_size": 1,
+     "use_cache": true,
+     "use_flash_attn": true,
+     "use_reentrant": false,
+     "vocab_size": 250002
+ }
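The `auto_map` entries above resolve to custom modeling code hosted in the `jinaai/xlm-roberta-flash-implementation` repository, so loading the raw transformer outside of Sentence Transformers requires trusting remote code. A minimal sketch, using the checkpoint id from the README:

```python
from transformers import AutoModel

# trust_remote_code is required because auto_map points at the custom XLMRobertaLoRA classes
model = AutoModel.from_pretrained("Jrinky/jina_final_temp", trust_remote_code=True)
```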
config_sentence_transformers.json ADDED
@@ -0,0 +1,16 @@
+ {
+     "__version__": {
+         "sentence_transformers": "3.4.1",
+         "transformers": "4.49.0",
+         "pytorch": "2.3.1+cu121"
+     },
+     "prompts": {
+         "retrieval.query": "Represent the query for retrieving evidence documents: ",
+         "retrieval.passage": "Represent the document for retrieval: ",
+         "separation": "",
+         "classification": "",
+         "text-matching": ""
+     },
+     "default_prompt_name": null,
+     "similarity_fn_name": "cosine"
+ }
configuration_xlm_roberta.py ADDED
@@ -0,0 +1,130 @@
1
+ from typing import Any, Dict, List, Optional, Union
2
+
3
+ import torch
4
+ from transformers import PretrainedConfig
5
+
6
+
7
+ class XLMRobertaFlashConfig(PretrainedConfig):
8
+
9
+ model_type = "xlm-roberta"
10
+
11
+ def __init__(
12
+ self,
13
+ vocab_size: int = 250002,
14
+ hidden_size: int = 1024,
15
+ num_hidden_layers: int = 24,
16
+ num_attention_heads: int = 16,
17
+ intermediate_size: int = 4096,
18
+ hidden_act: str = "gelu",
19
+ hidden_dropout_prob: float = 0.1,
20
+ attention_probs_dropout_prob: float = 0.1,
21
+ max_position_embeddings: int = 8194,
22
+ type_vocab_size: int = 1,
23
+ initializer_range: float = 0.02,
24
+ layer_norm_eps: float = 1e-05,
25
+ pad_token_id: int = 1,
26
+ bos_token_id: int = 0,
27
+ eos_token_id: int = 2,
28
+ position_embedding_type: str = "rotary",
29
+ rotary_emb_base: float = 10000.0,
30
+ use_cache: bool = True,
31
+ use_reentrant: bool = False,
32
+ classifier_dropout: Optional[float] = None,
33
+ lora_adaptations: Optional[List[str]] = None,
34
+ task_instructions: Optional[Dict[str, str]] = None,
35
+ lora_rank: int = 4,
36
+ lora_dropout_p: float = 0.0,
37
+ lora_alpha: int = 1,
38
+ lora_main_params_trainable: bool = False,
39
+ load_trained_adapters: bool = False,
40
+ use_flash_attn: bool = True,
41
+ torch_dtype: Optional[Union[str, torch.dtype]] = None,
42
+ emb_pooler: Optional[str] = None,
43
+ matryoshka_dimensions: Optional[List[int]] = None,
44
+ truncate_dim: Optional[int] = None,
45
+ **kwargs: Dict[str, Any],
46
+ ):
47
+ """
48
+ Initialize the XLMRobertaFlashConfig configuration.
49
+
50
+ Args:
51
+ vocab_size (int): Size of the vocabulary.
52
+ hidden_size (int): Dimensionality of the encoder layers and the pooler layer.
53
+ num_hidden_layers (int): Number of hidden layers in the Transformer encoder.
54
+ num_attention_heads (int): Number of attention heads for each attention layer in the Transformer encoder.
55
+ intermediate_size (int): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer.
56
+ hidden_act (str): The activation function to use.
57
+ hidden_dropout_prob (float): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
58
+ attention_probs_dropout_prob (float): The dropout ratio for the attention probabilities.
59
+ max_position_embeddings (int): The maximum length of the position embeddings.
60
+ type_vocab_size (int): The vocabulary size of the token type ids.
61
+ initializer_range (float): The standard deviation for initializing all weight matrices.
62
+ layer_norm_eps (float): The epsilon used by the layer normalization layers.
63
+ pad_token_id (int): The ID of the padding token.
64
+ bos_token_id (int): The ID of the beginning-of-sequence token.
65
+ eos_token_id (int): The ID of the end-of-sequence token.
66
+ position_embedding_type (str): Type of position embeddings. Options are 'absolute', 'alibi', or 'rotary'.
67
+ rotary_emb_base (float): Base for rotary embeddings.
68
+ use_cache (bool): Whether or not the model should return the last key/values attentions (not used by all models).
69
+ use_reentrant (bool): Whether or not the model should enable the 'use_reentrant' flag in gradient checkpointing.
70
+ classifier_dropout (Optional[float]): The dropout ratio for the classification head.
71
+ lora_adaptations (Optional[List[str]]): LoRA adaptations configuration.
72
+ task_instructions (Optional[Dict[str, str]]): Task-specific instruction prompts configuration.
73
+ lora_rank (int): Rank for LoRA adaptations.
74
+ lora_dropout_p (float): Dropout probability for LoRA adaptations.
75
+ lora_alpha (int): Alpha parameter for LoRA.
76
+ lora_main_params_trainable (bool): Whether to make the main model parameters trainable when using LoRA.
77
+ load_trained_adapters (bool): Whether to load trained adapters.
78
+ use_flash_attn (bool): Whether to use FlashAttention.
79
+ torch_dtype (Optional[Union[str, torch.dtype]]): Data type for the tensors.
80
+ emb_pooler (Optional[str]): Pooling layer configuration.
81
+ matryoshka_dimensions (Optional[List[int]]): Configuration for matryoshka dimension reduction.
82
+ truncate_dim (Optional[int]): Dimension to truncate embeddings to, if any.
83
+ **kwargs (Dict[str, Any]): Additional keyword arguments passed to the configuration.
84
+ """
85
+
86
+ super().__init__(
87
+ pad_token_id=pad_token_id,
88
+ bos_token_id=bos_token_id,
89
+ eos_token_id=eos_token_id,
90
+ **kwargs,
91
+ )
92
+
93
+ self.vocab_size = vocab_size
94
+ self.hidden_size = hidden_size
95
+ self.num_hidden_layers = num_hidden_layers
96
+ self.num_attention_heads = num_attention_heads
97
+ self.hidden_act = hidden_act
98
+ self.intermediate_size = intermediate_size
99
+ self.hidden_dropout_prob = hidden_dropout_prob
100
+ self.attention_probs_dropout_prob = attention_probs_dropout_prob
101
+ self.max_position_embeddings = max_position_embeddings
102
+ self.type_vocab_size = type_vocab_size
103
+ self.initializer_range = initializer_range
104
+ self.layer_norm_eps = layer_norm_eps
105
+ self.position_embedding_type = position_embedding_type
106
+ self.rotary_emb_base = rotary_emb_base
107
+ self.use_cache = use_cache
108
+ self.use_reentrant = use_reentrant
109
+ self.classifier_dropout = classifier_dropout
110
+ self.load_trained_adapters = load_trained_adapters
111
+ self.lora_adaptations = lora_adaptations
112
+ self.task_instructions = task_instructions
113
+ self.lora_rank = lora_rank
114
+ self.lora_dropout_p = lora_dropout_p
115
+ self.lora_alpha = lora_alpha
116
+ self.lora_main_params_trainable = lora_main_params_trainable
117
+ self.use_flash_attn = use_flash_attn
118
+ self.emb_pooler = emb_pooler
119
+ self.matryoshka_dimensions = matryoshka_dimensions
120
+ self.truncate_dim = truncate_dim
121
+ if (
122
+ torch_dtype
123
+ and hasattr(torch, torch_dtype)
124
+ and type(getattr(torch, torch_dtype)) is torch.dtype
125
+ ):
126
+ self.torch_dtype = getattr(torch, torch_dtype)
127
+ else:
128
+ self.torch_dtype = torch_dtype
129
+ if not self.use_flash_attn or not torch.cuda.is_available():
130
+ self.torch_dtype = torch.float32
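The tail of this constructor resolves the requested dtype: a string such as "bfloat16" is looked up on the torch module, and the result is forced back to float32 whenever FlashAttention is disabled or no CUDA device is available. A minimal instantiation sketch follows; the class name, import path, and argument values are assumptions carried over from upstream jinaai/jina-embeddings-v3 and are not shown in this commit.

```python
# Sketch only: class name and import path are assumptions from upstream
# jina-embeddings-v3; this hunk does not show them.
import torch
from configuration_xlm_roberta import XLMRobertaFlashConfig  # hypothetical local import

cfg = XLMRobertaFlashConfig(
    torch_dtype="bfloat16",   # a string is resolved via getattr(torch, "bfloat16")
    use_flash_attn=True,
)
# On a machine without CUDA (or with use_flash_attn=False) the final branch of
# __init__ overrides the requested dtype and stores torch.float32 instead.
print(cfg.torch_dtype)
```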
custom_st.py ADDED
@@ -0,0 +1,229 @@
+ import json
+ import logging
+ import os
+ from io import BytesIO
+ from typing import Any, Dict, List, Optional, Tuple, Union
+
+ import torch
+ from torch import nn
+ from transformers import AutoConfig, AutoModel, AutoTokenizer
+
+ logger = logging.getLogger(__name__)
+
+
+ class Transformer(nn.Module):
+     """Huggingface AutoModel to generate token embeddings.
+     Loads the correct class, e.g. BERT / RoBERTa etc.
+
+     Args:
+         model_name_or_path: Huggingface models name
+             (https://huggingface.co/models)
+         max_seq_length: Truncate any inputs longer than max_seq_length
+         model_args: Keyword arguments passed to the Huggingface
+             Transformers model
+         tokenizer_args: Keyword arguments passed to the Huggingface
+             Transformers tokenizer
+         config_args: Keyword arguments passed to the Huggingface
+             Transformers config
+         cache_dir: Cache dir for Huggingface Transformers to store/load
+             models
+         do_lower_case: If true, lowercases the input (independent if the
+             model is cased or not)
+         tokenizer_name_or_path: Name or path of the tokenizer. When
+             None, then model_name_or_path is used
+     """
+
+     save_in_root: bool = True
+
+     def __init__(
+         self,
+         model_name_or_path: str,
+         max_seq_length: int = None,
+         model_args: Dict[str, Any] = None,
+         tokenizer_args: Dict[str, Any] = None,
+         config_args: Dict[str, Any] = None,
+         cache_dir: str = None,
+         do_lower_case: bool = False,
+         tokenizer_name_or_path: str = None,
+         **kwargs,
+     ) -> None:
+         super().__init__()
+         self.config_keys = ["max_seq_length", "do_lower_case"]
+         self.do_lower_case = do_lower_case
+         if model_args is None:
+             model_args = {}
+         if tokenizer_args is None:
+             tokenizer_args = {}
+         if config_args is None:
+             config_args = {}
+
+         if kwargs.get("backend", "torch") != "torch":
+             logger.warning(
+                 f'"jinaai/jina-embeddings-v3" is currently not compatible with the {kwargs["backend"]} backend. '
+                 'Continuing with the "torch" backend.'
+             )
+
+         self.config = AutoConfig.from_pretrained(model_name_or_path, **config_args, cache_dir=cache_dir)
+
+         self._lora_adaptations = self.config.lora_adaptations
+         if (
+             not isinstance(self._lora_adaptations, list)
+             or len(self._lora_adaptations) < 1
+         ):
+             raise ValueError(
+                 f"`lora_adaptations` must be a list and contain at least one element"
+             )
+         self._adaptation_map = {
+             name: idx for idx, name in enumerate(self._lora_adaptations)
+         }
+
+         self.default_task = model_args.pop('default_task', None)
+
+         self.auto_model = AutoModel.from_pretrained(model_name_or_path, config=self.config, cache_dir=cache_dir, **model_args)
+
+         if max_seq_length is not None and "model_max_length" not in tokenizer_args:
+             tokenizer_args["model_max_length"] = max_seq_length
+         self.tokenizer = AutoTokenizer.from_pretrained(
+             tokenizer_name_or_path if tokenizer_name_or_path is not None else model_name_or_path,
+             cache_dir=cache_dir,
+             **tokenizer_args,
+         )
+
+         # No max_seq_length set. Try to infer from model
+         if max_seq_length is None:
+             if (
+                 hasattr(self.auto_model, "config")
+                 and hasattr(self.auto_model.config, "max_position_embeddings")
+                 and hasattr(self.tokenizer, "model_max_length")
+             ):
+                 max_seq_length = min(self.auto_model.config.max_position_embeddings, self.tokenizer.model_max_length)
+
+         self.max_seq_length = max_seq_length
+
+         if tokenizer_name_or_path is not None:
+             self.auto_model.config.tokenizer_class = self.tokenizer.__class__.__name__
+
+
+     @property
+     def default_task(self):
+         return self._default_task
+
+     @default_task.setter
+     def default_task(self, task: Union[None, str]):
+         self._validate_task(task)
+         self._default_task = task
+
+
+     def _validate_task(self, task: str):
+         if task and task not in self._lora_adaptations:
+             raise ValueError(
+                 f"Unsupported task '{task}'. "
+                 f"Supported tasks are: {', '.join(self.config.lora_adaptations)}. "
+                 f"Alternatively, don't pass the `task` argument to disable LoRA."
+             )
+
+     def forward(
+         self, features: Dict[str, torch.Tensor], task: Optional[str] = None
+     ) -> Dict[str, torch.Tensor]:
+         """Returns token_embeddings, cls_token"""
+         self._validate_task(task)
+         task = task or self.default_task
+         adapter_mask = None
+         if task:
+             task_id = self._adaptation_map[task]
+             num_examples = features['input_ids'].size(0)
+             adapter_mask = torch.full(
+                 (num_examples,), task_id, dtype=torch.int32, device=features['input_ids'].device
+             )
+
+         lora_arguments = (
+             {"adapter_mask": adapter_mask} if adapter_mask is not None else {}
+         )
+         features.pop('prompt_length', None)
+         output_states = self.auto_model.forward(**features, **lora_arguments, return_dict=False)
+         output_tokens = output_states[0]
+         features.update({"token_embeddings": output_tokens, "attention_mask": features["attention_mask"]})
+         return features
+
+     def get_word_embedding_dimension(self) -> int:
+         return self.auto_model.config.hidden_size
+
+     def tokenize(
+         self,
+         texts: Union[List[str], List[dict], List[Tuple[str, str]]],
+         padding: Union[str, bool] = True
+     ) -> Dict[str, torch.Tensor]:
+         """Tokenizes a text and maps tokens to token-ids"""
+         output = {}
+         if isinstance(texts[0], str):
+             to_tokenize = [texts]
+         elif isinstance(texts[0], dict):
+             to_tokenize = []
+             output["text_keys"] = []
+             for lookup in texts:
+                 text_key, text = next(iter(lookup.items()))
+                 to_tokenize.append(text)
+                 output["text_keys"].append(text_key)
+             to_tokenize = [to_tokenize]
+         else:
+             batch1, batch2 = [], []
+             for text_tuple in texts:
+                 batch1.append(text_tuple[0])
+                 batch2.append(text_tuple[1])
+             to_tokenize = [batch1, batch2]
+
+         # strip
+         to_tokenize = [[str(s).strip() for s in col] for col in to_tokenize]
+
+         # Lowercase
+         if self.do_lower_case:
+             to_tokenize = [[s.lower() for s in col] for col in to_tokenize]
+
+         output.update(
+             self.tokenizer(
+                 *to_tokenize,
+                 padding=padding,
+                 truncation="longest_first",
+                 return_tensors="pt",
+                 max_length=self.max_seq_length,
+             )
+         )
+         return output
+
+     def get_config_dict(self) -> Dict[str, Any]:
+         return {key: self.__dict__[key] for key in self.config_keys}
+
+     def save(self, output_path: str, safe_serialization: bool = True) -> None:
+         self.auto_model.save_pretrained(output_path, safe_serialization=safe_serialization)
+         self.tokenizer.save_pretrained(output_path)
+
+         with open(os.path.join(output_path, "sentence_bert_config.json"), "w") as fOut:
+             json.dump(self.get_config_dict(), fOut, indent=2)
+
+
+     @classmethod
+     def load(cls, input_path: str) -> "Transformer":
+         # Old classes used other config names than 'sentence_bert_config.json'
+         for config_name in [
+             "sentence_bert_config.json",
+             "sentence_roberta_config.json",
+             "sentence_distilbert_config.json",
+             "sentence_camembert_config.json",
+             "sentence_albert_config.json",
+             "sentence_xlm-roberta_config.json",
+             "sentence_xlnet_config.json",
+         ]:
+             sbert_config_path = os.path.join(input_path, config_name)
+             if os.path.exists(sbert_config_path):
+                 break
+
+         with open(sbert_config_path) as fIn:
+             config = json.load(fIn)
+         # Don't allow configs to set trust_remote_code
+         if "model_args" in config and "trust_remote_code" in config["model_args"]:
+             config["model_args"].pop("trust_remote_code")
+         if "tokenizer_args" in config and "trust_remote_code" in config["tokenizer_args"]:
+             config["tokenizer_args"].pop("trust_remote_code")
+         if "config_args" in config and "trust_remote_code" in config["config_args"]:
+             config["config_args"].pop("trust_remote_code")
+         return cls(model_name_or_path=input_path, **config)
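Together with modules.json below, this module is what Sentence Transformers instantiates when the repository is loaded with trust_remote_code=True. A minimal usage sketch follows; the repository path is a placeholder and the task name "retrieval.query" is assumed from jina-embeddings-v3's usual LoRA adaptations rather than stated in this commit.

```python
# Minimal usage sketch. Placeholders/assumptions: the repo path, the task name
# "retrieval.query", and a reasonably recent sentence-transformers release.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer(
    "path/to/this-repo",      # placeholder: local clone or Hub id of this model
    trust_remote_code=True,   # needed so custom_st.Transformer is imported
)

sentences = [
    "How do rotary position embeddings work?",
    "A short passage about positional encodings in transformers.",
]

# "task" is routed to Transformer.forward via the "kwargs": ["task"] entry in
# modules.json and selects the matching LoRA adapter; omit it to disable LoRA.
embeddings = model.encode(sentences, task="retrieval.query")
print(embeddings.shape)  # (2, 1024) mean-pooled, L2-normalized vectors
```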
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b769c8342fa64b3c66cba1e268fe931d8a3b498879e58bc2518cd4bff76e5a02
+ size 1144685320
modules.json ADDED
@@ -0,0 +1,23 @@
+ [
+   {
+     "idx": 0,
+     "name": "transformer",
+     "path": "",
+     "type": "custom_st.Transformer",
+     "kwargs": [
+       "task"
+     ]
+   },
+   {
+     "idx": 1,
+     "name": "pooler",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "normalizer",
+     "path": "2_Normalize",
+     "type": "sentence_transformers.models.Normalize"
+   }
+ ]
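modules.json chains three modules: the custom Transformer above (module 0, which also receives the task keyword), mean pooling over the token embeddings (module 1, configured in 1_Pooling/config.json), and L2 normalization (module 2). A rough, hand-rolled equivalent of that pipeline, shown only to make the data flow explicit rather than as the code path sentence-transformers actually executes, is:

```python
# Illustrative re-implementation of the three-stage pipeline above.
import torch
import torch.nn.functional as F

def encode_like_pipeline(transformer, sentences, task=None):
    features = transformer.tokenize(sentences)                # module 0: tokenize
    with torch.no_grad():
        features = transformer.forward(features, task=task)   # module 0: token embeddings (+ LoRA adapter)
    mask = features["attention_mask"].unsqueeze(-1).float()
    summed = (features["token_embeddings"] * mask).sum(dim=1)
    pooled = summed / mask.sum(dim=1).clamp(min=1e-9)         # module 1: mean pooling
    return F.normalize(pooled, p=2, dim=1)                    # module 2: L2 normalization
```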
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 2048,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "cls_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "<mask>",
+     "lstrip": true,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<pad>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b1d534111404e811687a1580c2d38bdfc5323412a9d9d14d5d92882255b98d07
+ size 17082988
tokenizer_config.json ADDED
@@ -0,0 +1,62 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<pad>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "3": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "250001": {
+       "content": "<mask>",
+       "lstrip": true,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "<s>",
+   "eos_token": "</s>",
+   "extra_special_tokens": {},
+   "mask_token": "<mask>",
+   "max_length": 2048,
+   "model_max_length": 2048,
+   "pad_to_multiple_of": null,
+   "pad_token": "<pad>",
+   "pad_token_type_id": 0,
+   "padding_side": "right",
+   "sep_token": "</s>",
+   "stride": 0,
+   "tokenizer_class": "XLMRobertaTokenizerFast",
+   "truncation_side": "right",
+   "truncation_strategy": "longest_first",
+   "unk_token": "<unk>"
+ }
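The tokenizer is the standard XLM-RoBERTa fast tokenizer capped at 2048 tokens with right-side truncation, matching max_seq_length in sentence_bert_config.json. A quick sanity-check sketch, with the repository path left as a placeholder:

```python
# Placeholder repo path; run from a local clone of this repository.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("path/to/this-repo")
print(type(tok).__name__, tok.model_max_length)      # XLMRobertaTokenizerFast 2048

enc = tok("rotary position embeddings", truncation=True)
print(tok.convert_ids_to_tokens(enc["input_ids"]))   # starts with '<s>' and ends with '</s>'
```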