ldwang committed on
Commit
bd01d07
1 Parent(s): 1b543b3
Files changed (1)
  1. README.md +442 -24
README.md CHANGED
@@ -1,19 +1,25 @@
1
- ---
2
- license: mit
3
- language:
4
- - zh
5
- pipeline_tag: sentence-similarity
6
- tags:
7
- - sentence-transformers
8
- ---
9
 
10
 
11
  <h1 align="center">FlagEmbedding</h1>
12
-
13
 
14
  <h4 align="center">
15
  <p>
16
  <a href=#model-list>Model List</a> |
 
17
  <a href=#usage>Usage</a> |
18
  <a href="#evaluation">Evaluation</a> |
19
  <a href="#train">Train</a> |
@@ -22,7 +28,6 @@ tags:
22
  <p>
23
  </h4>
24
 
25
- More details please refer to our Github: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding).
26
 
27
  [English](README.md) | [中文](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md)
28
 
@@ -30,9 +35,9 @@ FlagEmbedding can map any text to a low-dimensional dense vector which can be us
30
  And it also can be used in vector database for LLMs.
31
 
32
  ************* 🌟**Updates**🌟 *************
33
- - 08/09/2023: BGE Models are integrated into **Langchain**, you can use it like [**this**](#using-langchain); C-MTEB **leaderboard** is [avaliable](https://huggingface.co/spaces/mteb/leaderboard).
34
  - 08/05/2023: Release base-scale and small-scale models, **best performance among the models of the same size 🤗**
35
- - 08/02/2023: Release `bge-large-*`(short for BAAI General Embedding) Models, **rank 1st on MTEB and C-MTEB benchmark!**
36
  - 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test datasets.
37
 
38
 
@@ -42,16 +47,42 @@ And it also can be used in vector database for LLMs.
42
 
43
  | Model | Language | Description | query instruction for retrieval\* |
44
  |:-------------------------------|:--------:| :--------:| :--------:|
45
- | [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` |
46
  | [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | rank **2nd** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` |
47
  | [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` |
48
- | [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章:` |
49
  | [BAAI/bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | Chinese | This model is trained without instruction, and rank **2nd** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | |
50
  | [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | a base-scale model with ability similar to `bge-large-zh` | `为这个句子生成表示以用于检索相关文章:` |
51
  | [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` |
52
 
53
  \*: If you need to search the **long** relevant passages to a **short** query (s2p retrieval task), you need to add the instruction to the query; in other cases, no instruction is needed, just use the original query directly. In all cases, **no instruction** needs to be added to passages.
54
 
55
  ## Usage
56
 
57
  Here are some examples of using `bge` models with
@@ -65,10 +96,11 @@ If it doesn't work for you, you can see [FlagEmbedding](https://github.com/FlagO
65
 
66
  ```python
67
  from FlagEmbedding import FlagModel
68
- sentences = ["样例数据-1", "样例数据-2"]
 
69
  model = FlagModel('BAAI/bge-large-zh', query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:")
70
- embeddings_1 = model.encode(sentences)
71
- embeddings_2 = model.encode(sentences)
72
  similarity = embeddings_1 @ embeddings_2.T
73
  print(similarity)
74
 
@@ -83,6 +115,7 @@ scores = q_embeddings @ p_embeddings.T
83
  For the value of the argument `query_instruction_for_retrieval`, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list).
84
 
85
  FlagModel will use all available GPUs when encoding; set `os.environ["CUDA_VISIBLE_DEVICES"]` to choose which GPUs to use.
 
86
 
87
 
88
  #### Using Sentence-Transformers
@@ -94,10 +127,11 @@ pip install -U sentence-transformers
94
  ```
95
  ```python
96
  from sentence_transformers import SentenceTransformer
97
- sentences = ["样例数据-1", "样例数据-2"]
 
98
  model = SentenceTransformer('BAAI/bge-large-zh')
99
- embeddings_1 = model.encode(sentences, normalize_embeddings=True)
100
- embeddings_2 = model.encode(sentences, normalize_embeddings=True)
101
  similarity = embeddings_1 @ embeddings_2.T
102
  print(similarity)
103
  ```
@@ -124,10 +158,11 @@ from langchain.embeddings import HuggingFaceBgeEmbeddings
124
  model_name = "BAAI/bge-small-en"
125
  model_kwargs = {'device': 'cuda'}
126
  encode_kwargs = {'normalize_embeddings': True} # set True to compute cosine similarity
127
- model_norm = HuggingFaceBgeEmbeddings(
128
  model_name=model_name,
129
  model_kwargs=model_kwargs,
130
- encode_kwargs=encode_kwargs
 
131
  )
132
  ```
133
 
@@ -239,7 +274,7 @@ Besides the negative in the triple, we also adopt in-batch negatives strategy.
239
  We employ the cross-device negatives sharing method to share negatives among different GPUs,
240
  which can dramatically **increase the number of negatives**.
241
 
242
- We trained our model on 48 A100(40G) GPUs with a large batch size of 32,768 (so there are **65,535** negatives for each query in a batch).
243
  We used the AdamW optimizer with a learning rate of 1e-5.
244
  The temperature for contrastive loss is 0.01.
245
 
@@ -256,17 +291,400 @@ You can easily finetune your model with it.
256
 
257
  - For English, we collect 230M text pairs from [wikipedia](https://huggingface.co/datasets/wikipedia), [cc-net](https://github.com/facebookresearch/cc_net), and so on.
258
 
259
- - For chinese, we collect 120M text pairs from [wudao](https://github.com/BAAI-WuDao/Data), [simclue](https://github.com/CLUEbenchmark/SimCLUE) and so on.
260
 
261
  **The data collection is to be released in the future.**
262
 
263
  We will continually update the embedding models and training codes,
264
  hoping to promote the development of the embedding model community.
265
 
266
 
267
 
268
  ## License
269
  FlagEmbedding is licensed under [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
270
 
271
 
272
 
1
 
2
 
3
  <h1 align="center">FlagEmbedding</h1>
4
+ <p align="center">
5
+ <a href="https://github.com/FlagOpen/FlagEmbedding">
6
+ <img alt="Build" src="https://img.shields.io/badge/Contribution-Welcome-blue">
7
+ </a>
8
+ <a href="https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE">
9
+ <img alt="License" src="https://img.shields.io/badge/LICENSE-MIT-green">
10
+ </a>
11
+ <a href="https://huggingface.co/C-MTEB">
12
+ <img alt="Build" src="https://img.shields.io/badge/C_MTEB-🤗-yellow">
13
+ </a>
14
+ <a href="https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/baai_general_embedding">
15
+ <img alt="Build" src="https://img.shields.io/badge/FlagEmbedding-1.0-red">
16
+ </a>
17
+ </p>
18
 
19
  <h4 align="center">
20
  <p>
21
  <a href=#model-list>Model List</a> |
22
+ <a href=#frequently-asked-questions>FAQ</a> |
23
  <a href=#usage>Usage</a> |
24
  <a href="#evaluation">Evaluation</a> |
25
  <a href="#train">Train</a> |
 
28
  <p>
29
  </h4>
30
 
 
31
 
32
  [English](README.md) | [中文](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md)
33
 
 
35
  And it also can be used in vector database for LLMs.
36
 
37
  ************* 🌟**Updates**🌟 *************
38
+ - 08/09/2023: BGE models are integrated into **Langchain**; you can use them like [this](#using-langchain). The C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard).
39
  - 08/05/2023: Release base-scale and small-scale models, **best performance among the models of the same size 🤗**
40
+ - 08/02/2023: Release `bge-large-*` (short for BAAI General Embedding) models, which **rank 1st on the MTEB and C-MTEB benchmarks!** :tada: :tada:
41
  - 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test datasets.
42
 
43
 
 
47
 
48
  | Model | Language | Description | query instruction for retrieval\* |
49
  |:-------------------------------|:--------:| :--------:| :--------:|
50
+ | [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | :trophy: rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` |
51
  | [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | rank **2nd** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` |
52
  | [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` |
53
+ | [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | :trophy: rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章:` |
54
  | [BAAI/bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | Chinese | This model is trained without instruction, and rank **2nd** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | |
55
  | [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | a base-scale model with ability similar to `bge-large-zh` | `为这个句子生成表示以用于检索相关文章:` |
56
  | [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` |
57
 
58
  \*: If you need to search the **long** relevant passages to a **short** query (s2p retrieval task), you need to add the instruction to the query; in other cases, no instruction is needed, just use the original query directly. In all cases, **no instruction** needs to be added to passages.
59
 
60
+ ## Frequently asked questions
61
+
62
+ 1. The similarity score between two dissimilar sentences is higher than 0.5
63
+
64
+ The similarity distribution of the current BGE model is roughly in the interval \[0.6, 1\].
65
+ So a similarity score greater than 0.5 does not indicate that the two sentences are similar.
66
+
67
+ For downstream tasks, such as passage retrieval or semantic similarity,
68
+ **what matters is the relative order of the scores, not the absolute value.**
69
+ If you need to filter similar sentences based on a similarity threshold,
70
+ please select an appropriate similarity threshold based on the similarity distribution on your data (such as 0.8, 0.85, or even 0.9).
71
+
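To make the thresholding advice above concrete, here is a minimal sketch; the scores and the 0.85 cut-off are illustrative only, not recommendations:

```python
import numpy as np

# illustrative similarity scores between one query and four candidate sentences
sims = np.array([0.92, 0.81, 0.68, 0.64])

# pick the threshold from the score distribution on your own data, not from the nominal 0.5 midpoint
threshold = 0.85
keep = np.where(sims >= threshold)[0]
print(keep)  # indices of the candidates treated as "similar"
```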
72
+
73
+ 2. When does the query instruction need to be used?
74
+
75
+ For a retrieval task that uses short queries to find long related documents,
76
+ it is recommended to add instructions for these short queries.
77
+ For other tasks, it is recommended not to add instructions.
78
+ For example, in the Quora task, which uses a short question to search for other related short questions,
79
+ adding the instruction is not recommended.
80
+ The best method to decide whether to add instructions for queries is to choose the setting that achieves better performance on your task.
81
+ In all cases, no instruction needs to be added to the documents/passages; you only need to decide whether to add the instruction to the queries.
82
+
83
+
84
+
85
+
86
  ## Usage
87
 
88
  Here are some examples of using `bge` models with
 
96
 
97
  ```python
98
  from FlagEmbedding import FlagModel
99
+ sentences_1 = ["样例数据-1", "样例数据-2"]
100
+ sentences_2 = ["样例数据-3", "样例数据-4"]
101
  model = FlagModel('BAAI/bge-large-zh', query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:")
102
+ embeddings_1 = model.encode(sentences_1)
103
+ embeddings_2 = model.encode(sentences_2)
104
  similarity = embeddings_1 @ embeddings_2.T
105
  print(similarity)
106
 
 
115
  For the value of the argument `query_instruction_for_retrieval`, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list).
116
 
117
  FlagModel will use all available GPUs when encoding; set `os.environ["CUDA_VISIBLE_DEVICES"]` to choose which GPUs to use.
118
+ You can also set `os.environ["CUDA_VISIBLE_DEVICES"]=""` to make all GPUs unavailable.
119
 
120
 
121
  #### Using Sentence-Transformers
 
127
  ```
128
  ```python
129
  from sentence_transformers import SentenceTransformer
130
+ sentences_1 = ["样例数据-1", "样例数据-2"]
131
+ sentences_2 = ["样例数据-3", "样例数据-4"]
132
  model = SentenceTransformer('BAAI/bge-large-zh')
133
+ embeddings_1 = model.encode(sentences_1, normalize_embeddings=True)
134
+ embeddings_2 = model.encode(sentences_2, normalize_embeddings=True)
135
  similarity = embeddings_1 @ embeddings_2.T
136
  print(similarity)
137
  ```
 
158
  model_name = "BAAI/bge-small-en"
159
  model_kwargs = {'device': 'cuda'}
160
  encode_kwargs = {'normalize_embeddings': True} # set True to compute cosine similarity
161
+ model = HuggingFaceBgeEmbeddings(
162
  model_name=model_name,
163
  model_kwargs=model_kwargs,
164
+ encode_kwargs=encode_kwargs,
165
+ query_instruction="为这个句子生成表示以用于检索相关文章:"
166
  )
167
  ```
168
 
 
274
  We employ the cross-device negatives sharing method to share negatives among different GPUs,
275
  which can dramatically **increase the number of negatives**.
276
 
277
+ We trained our model on 48 A100(40G) GPUs with a large batch size of 32,784 (so there are **65,567** negatives for each query in a batch).
278
  We used the AdamW optimizer with a learning rate of 1e-5.
279
  The temperature for contrastive loss is 0.01.
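As a rough illustration of this objective, below is a minimal single-device sketch of in-batch negatives with the 0.01 temperature; the released training code additionally shares negatives across devices, and the function and variable names here are illustrative:

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(q_emb: torch.Tensor, p_emb: torch.Tensor, temperature: float = 0.01) -> torch.Tensor:
    # q_emb[i] and p_emb[i] form a positive pair; every other passage in the batch acts as a negative
    q_emb = F.normalize(q_emb, dim=-1)
    p_emb = F.normalize(p_emb, dim=-1)
    scores = q_emb @ p_emb.T / temperature               # (batch, batch) similarity matrix
    labels = torch.arange(q_emb.size(0), device=q_emb.device)
    return F.cross_entropy(scores, labels)

# toy check with random vectors standing in for encoder outputs
loss = in_batch_contrastive_loss(torch.randn(8, 1024), torch.randn(8, 1024))
print(loss)
```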
280
 
 
291
 
292
  - For English, we collect 230M text pairs from [wikipedia](https://huggingface.co/datasets/wikipedia), [cc-net](https://github.com/facebookresearch/cc_net), and so on.
293
 
294
+ - For Chinese, we collect 120M text pairs from [wudao](https://github.com/BAAI-WuDao/Data), [simclue](https://github.com/CLUEbenchmark/SimCLUE), and so on.
295
 
296
  **The data collection is to be released in the future.**
297
 
298
+
299
+ ## Schedule
300
+ - [x] Chinese Massive Text Embedding Benchmark
301
+ - [x] release baai-general-embedding models
302
+ - [x] release codes for training
303
+ - [ ] Multilingual model
304
+ - [ ] Training Datasets
305
+ - [ ] ...
306
+
307
  We will continually update the embedding models and training codes,
308
  hoping to promote the development of the embedding model community.
309
 
310
 
311
+ ## Contact
312
+ If you have any questions or suggestions related to this project, feel free to open an issue or submit a pull request.
313
+ You can also email Shitao Xiao([email protected]) and Zheng Liu([email protected]).
314
+
315
 
316
  ## License
317
  FlagEmbedding is licensed under [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
318
 
319
 
320
 
321
+
322
+
323
+ <h1 align="center">FlagEmbedding</h1>
324
+
325
+
326
+ <h4 align="center">
327
+ <p>
328
+ <a href=#model-list>Model List</a> |
329
+ <a href=#usage>Usage</a> |
330
+ <a href="#evaluation">Evaluation</a> |
331
+ <a href="#train">Train</a> |
332
+ <a href="#contact">Contact</a> |
333
+ <a href="#license">License</a>
334
+ <p>
335
+ </h4>
336
+
337
+ For more details, please refer to our GitHub repo: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding).
338
+
339
+
340
+
341
+ [English](README.md) | [中文](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md)
342
+
343
+ FlagEmbedding can map any text to a low-dimensional dense vector which can be used for tasks like retrieval, classification, clustering, or semantic search.
344
+ It can also be used in vector databases for LLMs.
345
+
346
+ ************* 🌟**Updates**🌟 *************
347
+ - 09/12/2023: New Release:
348
+ - **New reranker model**: release a cross-encoder model bge-reranker-base, which is more powerful than an embedding model. We recommend using/fine-tuning it to re-rank the top-k documents returned by embedding models.
349
+ - **Updated embedding model**: release bge-*-v1.5 embedding models to alleviate the issue of the similarity distribution and enhance retrieval ability without instruction.
350
+ - 09/07/2023: Update [fine-tune code](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md): Add script to mine hard negatives and support adding instruction during fine-tuning.
351
+ - 08/09/2023: BGE models are integrated into **Langchain**; you can use them like [this](#using-langchain). The C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard).
352
+ - 08/05/2023: Release base-scale and small-scale models, **best performance among the models of the same size 🤗**
353
+ - 08/02/2023: Release `bge-large-*` (short for BAAI General Embedding) models, which **rank 1st on the MTEB and C-MTEB benchmarks!** :tada: :tada:
354
+ - 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test datasets.
355
+
356
+
357
+ ## Model List
358
+
359
+ `bge` is short for `BAAI general embedding`.
360
+
361
+ | Model | Language | | Description | query instruction for retrieval\* |
362
+ |:-------------------------------|:--------:| :--------:| :--------:|:--------:|
363
+ | [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient \** | |
364
+ | [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient \** | |
365
+ | [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
366
+ | [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
367
+ | [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
368
+ | [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
369
+ | [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
370
+ | [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
371
+ | [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` |
372
+ | [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-en` | `Represent this sentence for searching relevant passages: ` |
373
+ | [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) |a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` |
374
+ | [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章:` |
375
+ | [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-zh` | `为这个句子生成表示以用于检索相关文章:` |
376
+ | [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` |
377
+
378
+
379
+ \*: If you need to search the relevant passages to a query, we suggest adding the instruction to the query; in other cases, no instruction is needed, just use the original query directly. In all cases, **no instruction** needs to be added to passages.
380
+
381
+ \**: To balance accuracy and time cost, a cross-encoder is widely used to re-rank the top-k documents retrieved by other simpler models.
382
+ For example, use the bge embedding model to retrieve the top 100 relevant documents, and then use the bge reranker to re-rank those 100 documents to get the final top-3 results.
383
+
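A minimal sketch of that retrieve-then-rerank flow, with an illustrative toy corpus and model choices (adjust the models, the instruction, and the cut-offs to your use case):

```python
from FlagEmbedding import FlagModel, FlagReranker
import numpy as np

corpus = ["文档-1", "文档-2", "文档-3"]   # toy corpus; in practice this is your document collection
query = "示例查询"

embedder = FlagModel('BAAI/bge-base-zh', query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:")
q_emb = embedder.encode_queries([query])
p_emb = embedder.encode(corpus)
scores = (q_emb @ p_emb.T)[0]
candidates = np.argsort(-scores)[:100]    # coarse top-k from the embedding model

reranker = FlagReranker('BAAI/bge-reranker-base')
rerank_scores = np.array(reranker.compute_score([[query, corpus[i]] for i in candidates]))
top3 = [corpus[candidates[j]] for j in np.argsort(-rerank_scores)[:3]]
print(top3)
```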
384
+
385
+ ## Frequently asked questions
386
+
387
+ <details>
388
+ <summary>1. How to fine-tune bge embedding model?</summary>
389
+
390
+ <!-- ### How to fine-tune bge embedding model? -->
391
+ Follow this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) to prepare data and fine-tune your model.
392
+ Some suggestions:
393
+ - Mine hard negatives following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune#data-format), which can improve the retrieval performance.
394
+ - If you pre-train bge on your data, the pre-trained model cannot be directly used to calculate similarity, and it must be fine-tuned with contrastive learning before computing similarity.
395
+ - If the accuracy of the fine-tuned model is still not high, it is recommended to use/fine-tune the cross-encoder model (bge-reranker) to re-rank the top-k results. Hard negatives are also needed to fine-tune the reranker.
396
+
397
+
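For orientation only, the fine-tuning data in the linked example is a JSONL file whose records look roughly like the dictionary below; treat the linked example as the authoritative schema, as the field contents here are made up:

```python
import json

# hypothetical record: one query, its positive passages, and mined hard-negative passages
record = {"query": "样例查询", "pos": ["相关文档"], "neg": ["不相关文档-1", "不相关文档-2"]}
with open("toy_finetune_data.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps(record, ensure_ascii=False) + "\n")
```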
398
+ </details>
399
+
400
+ <details>
401
+ <summary>2. The similarity score between two dissimilar sentences is higher than 0.5</summary>
402
+
403
+ <!-- ### The similarity score between two dissimilar sentences is higher than 0.5 -->
404
+ **We suggest using bge v1.5, which alleviates the issue of the similarity distribution.**
405
+
406
+ Since we fine-tune the models by contrastive learning with a temperature of 0.01,
407
+ the similarity distribution of the current BGE model is roughly in the interval \[0.6, 1\].
408
+ So a similarity score greater than 0.5 does not indicate that the two sentences are similar.
409
+
410
+ For downstream tasks, such as passage retrieval or semantic similarity,
411
+ **what matters is the relative order of the scores, not the absolute value.**
412
+ If you need to filter similar sentences based on a similarity threshold,
413
+ please select an appropriate similarity threshold based on the similarity distribution on your data (such as 0.8, 0.85, or even 0.9).
414
+
415
+ </details>
416
+
417
+ <details>
418
+ <summary>3. When does the query instruction need to be used?</summary>
419
+
420
+ <!-- ### When does the query instruction need to be used -->
421
+
422
+ For a retrieval task that uses short queries to find long related documents,
423
+ it is recommended to add instructions for these short queries.
424
+ **The best method to decide whether to add instructions for queries is choosing the setting that achieves better performance on your task.**
425
+ In all cases, no instruction needs to be added to the documents/passages.
426
+
427
+ </details>
428
+
429
+
430
+ ## Usage
431
+
432
+ ### Usage for Embedding Model
433
+
434
+ Here are some examples for using `bge` models with
435
+ [FlagEmbedding](#using-flagembedding), [Sentence-Transformers](#using-sentence-transformers), [Langchain](#using-langchain), or [Huggingface Transformers](#using-huggingface-transformers).
436
+
437
+ #### Using FlagEmbedding
438
+ ```
439
+ pip install -U FlagEmbedding
440
+ ```
441
+ If it doesn't work for you, you can see [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md) for more methods to install FlagEmbedding.
442
+
443
+ ```python
444
+ from FlagEmbedding import FlagModel
445
+ sentences_1 = ["样例数据-1", "样例数据-2"]
446
+ sentences_2 = ["样例数据-3", "样例数据-4"]
447
+ model = FlagModel('BAAI/bge-large-zh', query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:")
448
+ embeddings_1 = model.encode(sentences_1)
449
+ embeddings_2 = model.encode(sentences_2)
450
+ similarity = embeddings_1 @ embeddings_2.T
451
+ print(similarity)
452
+
453
+ # for an s2p (short query to long passage) retrieval task, we suggest using encode_queries(), which will automatically add the instruction to each query
454
+ # the corpus in a retrieval task can still use encode() or encode_corpus(), since passages don't need the instruction
455
+ queries = ['query_1', 'query_2']
456
+ passages = ["样例文档-1", "样例文档-2"]
457
+ q_embeddings = model.encode_queries(queries)
458
+ p_embeddings = model.encode(passages)
459
+ scores = q_embeddings @ p_embeddings.T
460
+ ```
461
+ For the value of the argument `query_instruction_for_retrieval`, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list).
462
+
463
+ By default, FlagModel will use all available GPUs when encoding. Please set `os.environ["CUDA_VISIBLE_DEVICES"]` to select specific GPUs.
464
+ You can also set `os.environ["CUDA_VISIBLE_DEVICES"]=""` to make all GPUs unavailable.
465
+
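For example (the device IDs here are illustrative), set the environment variable before the model is created:

```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"   # encode on GPU 0 only
# os.environ["CUDA_VISIBLE_DEVICES"] = ""  # uncomment to force CPU-only encoding

from FlagEmbedding import FlagModel
model = FlagModel('BAAI/bge-large-zh', query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:")
```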
466
+
467
+ #### Using Sentence-Transformers
468
+
469
+ You can also use the `bge` models with [sentence-transformers](https://www.SBERT.net):
470
+
471
+ ```
472
+ pip install -U sentence-transformers
473
+ ```
474
+ ```python
475
+ from sentence_transformers import SentenceTransformer
476
+ sentences_1 = ["样例数据-1", "样例数据-2"]
477
+ sentences_2 = ["样例数据-3", "样例数据-4"]
478
+ model = SentenceTransformer('BAAI/bge-large-zh')
479
+ embeddings_1 = model.encode(sentences_1, normalize_embeddings=True)
480
+ embeddings_2 = model.encode(sentences_2, normalize_embeddings=True)
481
+ similarity = embeddings_1 @ embeddings_2.T
482
+ print(similarity)
483
+ ```
484
+ For an s2p (short query to long passage) retrieval task,
485
+ each short query should start with an instruction (instructions see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list)).
486
+ But the instruction is not needed for passages.
487
+ ```python
488
+ from sentence_transformers import SentenceTransformer
489
+ queries = ['query_1', 'query_2']
490
+ passages = ["样例文档-1", "样例文档-2"]
491
+ instruction = "为这个句子生成表示以用于检索相关文章:"
492
+
493
+ model = SentenceTransformer('BAAI/bge-large-zh')
494
+ q_embeddings = model.encode([instruction+q for q in queries], normalize_embeddings=True)
495
+ p_embeddings = model.encode(passages, normalize_embeddings=True)
496
+ scores = q_embeddings @ p_embeddings.T
497
+ ```
498
+
499
+ #### Using Langchain
500
+
501
+ You can use `bge` in langchain like this:
502
+ ```python
503
+ from langchain.embeddings import HuggingFaceBgeEmbeddings
504
+ model_name = "BAAI/bge-small-en"
505
+ model_kwargs = {'device': 'cuda'}
506
+ encode_kwargs = {'normalize_embeddings': True} # set True to compute cosine similarity
507
+ model = HuggingFaceBgeEmbeddings(
508
+ model_name=model_name,
509
+ model_kwargs=model_kwargs,
510
+ encode_kwargs=encode_kwargs,
511
+ query_instruction="为这个句子生成表示以用于检索相关文章:"
512
+ )
513
+ model.query_instruction = "为这个句子生成表示以用于检索相关文章:"
514
+ ```
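Assuming the standard LangChain `Embeddings` interface, a quick way to exercise the object from the block above (the texts are illustrative) is:

```python
# embed_query prepends the query_instruction automatically; embed_documents leaves passages as-is
query_vec = model.embed_query("什么是BGE模型?")
doc_vecs = model.embed_documents(["BGE 是一个通用向量模型", "另一段示例文档"])
print(len(query_vec), len(doc_vecs))
```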
515
+
516
+
517
+ #### Using HuggingFace Transformers
518
+
519
+ With the transformers package, you can use the model like this: First, you pass your input through the transformer model, then you select the last hidden state of the first token (i.e., [CLS]) as the sentence embedding.
520
+
521
+ ```python
522
+ from transformers import AutoTokenizer, AutoModel
523
+ import torch
524
+ # Sentences we want sentence embeddings for
525
+ sentences = ["样例数据-1", "样例数据-2"]
526
+
527
+ # Load model from HuggingFace Hub
528
+ tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-zh')
529
+ model = AutoModel.from_pretrained('BAAI/bge-large-zh')
530
+ model.eval()
531
+
532
+ # Tokenize sentences
533
+ encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
534
+ # for s2p(short query to long passage) retrieval task, add an instruction to query (not add instruction for passages)
535
+ # encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt')
536
+
537
+ # Compute token embeddings
538
+ with torch.no_grad():
539
+ model_output = model(**encoded_input)
540
+ # Perform pooling. In this case, cls pooling.
541
+ sentence_embeddings = model_output[0][:, 0]
542
+ # normalize embeddings
543
+ sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
544
+ print("Sentence embeddings:", sentence_embeddings)
545
+ ```
546
+
547
+ ### Usage for Reranker
548
+
549
+ You can get a relevance score by inputting a query and a passage to the reranker.
550
+ The reranker is optimized with cross-entropy loss, so the relevance score is not bounded to a specific range.
551
+
552
+
553
+ #### Using FlagEmbedding
554
+ ```
555
+ pip install -U FlagEmbedding
556
+ ```
557
+
558
+ Get relevance score:
559
+ ```python
560
+ from FlagEmbedding import FlagReranker
561
+ reranker = FlagReranker('BAAI/bge-reranker-base', use_fp16=True)  # using fp16 can speed up computation
562
+
563
+ score = reranker.compute_score(['query', 'passage'])
564
+ print(score)
565
+
566
+ scores = reranker.compute_score([['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']])
567
+ print(scores)
568
+ ```
569
+
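Because the raw score is an unbounded logit, one optional post-processing step (not required by the library) is to squash it through a sigmoid if you prefer a value in (0, 1); this sketch reuses the `reranker` object from the block above:

```python
import math

raw = reranker.compute_score(['what is panda?', 'The giant panda is a bear species endemic to China.'])
print(1 / (1 + math.exp(-raw)))   # maps the logit into (0, 1)
```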
570
+
571
+ #### Using Huggingface transformers
572
+
573
+ ```python
574
+ import torch
575
+ from transformers import AutoModelForSequenceClassification, AutoTokenizer, BatchEncoding, PreTrainedTokenizerFast
576
+
577
+ tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-base')
578
+ model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-base')
579
+ model.eval()
580
+
581
+ pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]
582
+ with torch.no_grad():
583
+ inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
584
+ scores = model(**inputs, return_dict=True).logits.view(-1, ).float()
585
+ print(scores)
586
+ ```
587
+
588
+ ## Evaluation
589
+
590
+ `baai-general-embedding` models achieve **state-of-the-art performance on both the MTEB and C-MTEB leaderboards!**
590
+ For more details and evaluation tools, see our [scripts](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md).
592
+
593
+ - **MTEB**:
594
+
595
+ | Model Name | Dimension | Sequence Length | Average (56) | Retrieval (15) |Clustering (11) | Pair Classification (3) | Reranking (4) | STS (10) | Summarization (1) | Classification (12) |
596
+ |:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
597
+ | [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | **64.23** | **54.29** | 46.08 | 87.12 | 60.03 | 83.11 | 31.61 | 75.97 |
598
+ | [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | 63.55 | 53.25 | 45.77 | 86.55 | 58.86 | 82.4 | 31.07 | 75.53 |
599
+ | [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | 62.17 |51.68 | 43.82 | 84.92 | 58.36 | 81.59 | 30.12 | 74.14 |
600
+ | [bge-large-en](https://huggingface.co/BAAI/bge-large-en) | 1024 | 512 | 63.98 | 53.9 | 46.98 | 85.8 | 59.48 | 81.56 | 32.06 | 76.21 |
601
+ | [bge-base-en](https://huggingface.co/BAAI/bge-base-en) | 768 | 512 | 63.36 | 53.0 | 46.32 | 85.86 | 58.7 | 81.84 | 29.27 | 75.27 |
602
+ | [gte-large](https://huggingface.co/thenlper/gte-large) | 1024 | 512 | 63.13 | 52.22 | 46.84 | 85.00 | 59.13 | 83.35 | 31.66 | 73.33 |
603
+ | [gte-base](https://huggingface.co/thenlper/gte-base) | 768 | 512 | 62.39 | 51.14 | 46.2 | 84.57 | 58.61 | 82.3 | 31.17 | 73.01 |
604
+ | [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1024| 512 | 62.25 | 50.56 | 44.49 | 86.03 | 56.61 | 82.05 | 30.19 | 75.24 |
605
+ | [bge-small-en](https://huggingface.co/BAAI/bge-small-en) | 384 | 512 | 62.11 | 51.82 | 44.31 | 83.78 | 57.97 | 80.72 | 30.53 | 74.37 |
606
+ | [instructor-xl](https://huggingface.co/hkunlp/instructor-xl) | 768 | 512 | 61.79 | 49.26 | 44.74 | 86.62 | 57.29 | 83.06 | 32.32 | 61.79 |
607
+ | [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 768 | 512 | 61.5 | 50.29 | 43.80 | 85.73 | 55.91 | 81.05 | 30.28 | 73.84 |
608
+ | [gte-small](https://huggingface.co/thenlper/gte-small) | 384 | 512 | 61.36 | 49.46 | 44.89 | 83.54 | 57.7 | 82.07 | 30.42 | 72.31 |
609
+ | [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 1536 | 8192 | 60.99 | 49.25 | 45.9 | 84.89 | 56.32 | 80.97 | 30.8 | 70.93 |
610
+ | [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 384 | 512 | 59.93 | 49.04 | 39.92 | 84.67 | 54.32 | 80.39 | 31.16 | 72.94 |
611
+ | [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 768 | 512 | 59.51 | 42.24 | 43.72 | 85.06 | 56.42 | 82.63 | 30.08 | 73.42 |
612
+ | [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 768 | 514 | 57.78 | 43.81 | 43.69 | 83.04 | 59.36 | 80.28 | 27.49 | 65.07 |
613
+ | [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 4096 | 2048 | 57.59 | 48.22 | 38.93 | 81.9 | 55.65 | 77.74 | 33.6 | 66.19 |
614
+
615
+
616
+
617
+ - **C-MTEB**:
618
+ We create the benchmark C-MTEB for Chinese text embedding which consists of 31 datasets from 6 tasks.
619
+ Please refer to [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md) for a detailed introduction.
620
+
621
+ | Model | Embedding dimension | Avg | Retrieval | STS | PairClassification | Classification | Reranking | Clustering |
622
+ |:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
623
+ | [**BAAI/bge-large-zh-v1.5**](https://huggingface.co/BAAI/bge-large-zh-v1.5) | 1024 | **64.53** | 70.46 | 56.25 | 81.6 | 69.13 | 65.84 | 48.99 |
624
+ | [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | 768 | 63.13 | 69.49 | 53.72 | 79.75 | 68.07 | 65.39 | 47.53 |
625
+ | [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | 512 | 57.82 | 61.77 | 49.11 | 70.41 | 63.96 | 60.92 | 44.18 |
626
+ | [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | 1024 | 64.20 | 71.53 | 54.98 | 78.94 | 68.32 | 65.11 | 48.39 |
627
+ | [bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | 1024 | 63.53 | 70.55 | 53 | 76.77 | 68.58 | 64.91 | 50.01 |
628
+ | [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | 768 | 62.96 | 69.53 | 54.12 | 77.5 | 67.07 | 64.91 | 47.63 |
629
+ | [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 1024 | 58.79 | 63.66 | 48.44 | 69.89 | 67.34 | 56.00 | 48.23 |
630
+ | [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | 512 | 58.27 | 63.07 | 49.45 | 70.35 | 63.64 | 61.48 | 45.09 |
631
+ | [m3e-base](https://huggingface.co/moka-ai/m3e-base) | 768 | 57.10 | 56.91 | 50.47 | 63.99 | 67.52 | 59.34 | 47.68 |
632
+ | [m3e-large](https://huggingface.co/moka-ai/m3e-large) | 1024 | 57.05 | 54.75 | 50.42 | 64.3 | 68.2 | 59.66 | 48.88 |
633
+ | [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 768 | 55.48 | 61.63 | 46.49 | 67.07 | 65.35 | 54.35 | 40.68 |
634
+ | [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 384 | 55.38 | 59.95 | 45.27 | 66.45 | 65.85 | 53.86 | 45.26 |
635
+ | [text-embedding-ada-002(OpenAI)](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) | 1536 | 53.02 | 52.0 | 43.35 | 69.56 | 64.31 | 54.28 | 45.68 |
636
+ | [luotuo](https://huggingface.co/silk-road/luotuo-bert-medium) | 1024 | 49.37 | 44.4 | 42.78 | 66.62 | 61 | 49.25 | 44.39 |
637
+ | [text2vec-base](https://huggingface.co/shibing624/text2vec-base-chinese) | 768 | 47.63 | 38.79 | 43.41 | 67.41 | 62.19 | 49.45 | 37.66 |
638
+ | [text2vec-large](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 1024 | 47.36 | 41.94 | 44.97 | 70.86 | 60.66 | 49.16 | 30.02 |
639
+
640
+
641
+ - **Reranking**:
642
+ See [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/) for evaluation script.
643
+
644
+ | Model | T2Reranking | T2RerankingZh2En\* | T2RerankingEn2Zh\* | MmarcoReranking | CMedQAv1 | CMedQAv2 | Avg |
645
+ |:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
646
+ | text2vec-base-multilingual | 64.66 | 62.94 | 62.51 | 14.37 | 48.46 | 48.6 | 50.26 |
647
+ | multilingual-e5-small | 65.62 | 60.94 | 56.41 | 29.91 | 67.26 | 66.54 | 57.78 |
648
+ | multilingual-e5-large | 64.55 | 61.61 | 54.28 | 28.6 | 67.42 | 67.92 | 57.4 |
649
+ | multilingual-e5-base | 64.21 | 62.13 | 54.68 | 29.5 | 66.23 | 66.98 | 57.29 |
650
+ | m3e-base | 66.03 | 62.74 | 56.07 | 17.51 | 77.05 | 76.76 | 59.36 |
651
+ | m3e-large | 66.13 | 62.72 | 56.1 | 16.46 | 77.76 | 78.27 | 59.57 |
652
+ | bge-base-zh-v1.5 | 66.49 | 63.25 | 57.02 | 29.74 | 80.47 | 84.88 | 63.64 |
653
+ | bge-large-zh-v1.5 | 65.74 | 63.39 | 57.03 | 28.74 | 83.45 | 85.44 | 63.97 |
654
+ | [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 67.28 | 63.95 | 60.45 | 35.46 | 81.26 | 84.1 | 65.42 |
655
+ | [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 67.6 | 64.03 | 61.44 | 37.16 | 82.15 | 84.18 | 66.09 |
656
+
657
+ \* : T2RerankingZh2En and T2RerankingEn2Zh are cross-language retrieval tasks
658
+
659
+ ## Train
660
+
661
+ ### BAAI Embedding
662
+
663
+ We pre-train the models using RetroMAE and train them on large-scale pair data using contrastive learning.
664
+ **You can fine-tune the embedding model on your data following our [examples](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune).**
665
+ We also provide a [pre-train example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/pretrain).
666
+ Note that the goal of pre-training is to reconstruct the text; the pre-trained model cannot be used for similarity calculation directly and needs to be fine-tuned.
667
+ For more training details for bge, see [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md).
668
+
669
+
670
+
671
+ ### BGE Reranker
672
+
673
+ The cross-encoder performs full attention over the input pair,
674
+ which is more accurate than the embedding model (i.e., bi-encoder) but more time-consuming.
675
+ Therefore, it can be used to re-rank the top-k documents returned by the embedding model.
676
+ We train the cross-encoder on multilingual pair data.
677
+ The data format is the same as for the embedding model, so you can fine-tune it easily following our example.
678
+ For more details, please refer to [./FlagEmbedding/reranker/README.md](./FlagEmbedding/reranker/README.md).
679
+
680
+
681
+ ## Contact
682
+ If you have any questions or suggestions related to this project, feel free to open an issue or pull request.
683
+ You can also email Shitao Xiao([email protected]) and Zheng Liu([email protected]).
684
+
685
+
686
+ ## License
687
+ FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
688
+
689
+
690
+