yuchenlin committed
Commit e5cf043
Parent: dd96c62

Update BM25S model

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+corpus.jsonl filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,109 @@
---
language: en
library_name: bm25s
tags:
- bm25
- bm25s
- retrieval
- search
- lexical
---

# BM25S Index

This is a BM25S index created with the [`bm25s` library](https://github.com/xhluca/bm25s) (version `0.1.7`), an ultra-fast implementation of BM25. It can be used for lexical retrieval tasks.

💻 [BM25S GitHub Repository](https://github.com/xhluca/bm25s)\
🌐 [BM25S Homepage](https://bm25s.github.io)

## Installation

You can install the `bm25s` library with `pip`:

```bash
pip install "bm25s==0.1.7"

# Include extra dependencies such as a stemmer
pip install "bm25s[full]==0.1.7"

# For Hugging Face Hub usage
pip install huggingface_hub
```

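The `[full]` extra pulls in optional tokenization dependencies such as PyStemmer. As a brief sketch (not necessarily how this particular index was built), you can pass a stemmer and a stopword list to `bm25s.tokenize`:

```python
import bm25s
import Stemmer  # provided by the PyStemmer package, installed via bm25s[full]

# Tokenize with English stopword removal and stemming
stemmer = Stemmer.Stemmer("english")
tokens = bm25s.tokenize(
    ["a cat is a feline and likes to purr"], stopwords="en", stemmer=stemmer
)
```

If you tokenize the corpus with a stemmer at indexing time, apply the same stemmer to your queries so that query and document tokens match.
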
## Loading a `bm25s` index

You can use this index for information retrieval tasks. Here is an example:

```python
import bm25s
from bm25s.hf import BM25HF

# Load the index
retriever = BM25HF.load_from_hub("yuchenlin/BM25S_index_Llama-3-Magpie-Pro-1M-v0.1")

# You can now retrieve
query = "a cat is a feline"
results = retriever.retrieve(bm25s.tokenize(query), k=3)
```

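If you want the retrieved documents and their scores separately, the return value of `retrieve` can be unpacked, as shown in the `bm25s` README. A minimal sketch, assuming you also load the corpus (otherwise the first element holds document indices rather than document contents):

```python
import bm25s
from bm25s.hf import BM25HF

# Loading the corpus lets retrieve() return documents instead of only indices
retriever = BM25HF.load_from_hub(
    "yuchenlin/BM25S_index_Llama-3-Magpie-Pro-1M-v0.1", load_corpus=True
)

docs, scores = retriever.retrieve(bm25s.tokenize("a cat is a feline"), k=3)
for i in range(docs.shape[1]):  # one row per query, k columns per row
    print(f"Rank {i + 1} (score {scores[0, i]:.2f}): {docs[0, i]}")
```
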
## Saving a `bm25s` index

You can save a `bm25s` index to the Hugging Face Hub. Here is an example:

```python
import bm25s
from bm25s.hf import BM25HF

corpus = [
    "a cat is a feline and likes to purr",
    "a dog is the human's best friend and loves to play",
    "a bird is a beautiful animal that can fly",
    "a fish is a creature that lives in water and swims",
]

retriever = BM25HF(corpus=corpus)
retriever.index(bm25s.tokenize(corpus))

token = None  # You can get a token from the Hugging Face website
retriever.save_to_hub("yuchenlin/BM25S_index_Llama-3-Magpie-Pro-1M-v0.1", token=token)
```

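Besides pushing to the Hub, an index can also be saved to and loaded from a local directory via `save` and `load` (inherited from `bm25s.BM25`). A short sketch, with a hypothetical directory name:

```python
import bm25s
from bm25s.hf import BM25HF

corpus = [
    "a cat is a feline and likes to purr",
    "a dog is the human's best friend and loves to play",
]
retriever = BM25HF(corpus=corpus)
retriever.index(bm25s.tokenize(corpus))

# Save to a local directory (hypothetical path), including the corpus
retriever.save("my_local_bm25s_index", corpus=corpus)

# Reload it later, bringing the corpus back into memory as well
reloaded = BM25HF.load("my_local_bm25s_index", load_corpus=True)
```
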
## Advanced usage

You can leverage more advanced features of the BM25S library during `load_from_hub`:

```python
# Load the corpus and index with memory-mapping (mmap=True) to reduce memory usage
retriever = BM25HF.load_from_hub("yuchenlin/BM25S_index_Llama-3-Magpie-Pro-1M-v0.1", load_corpus=True, mmap=True)

# Load a different branch/revision
retriever = BM25HF.load_from_hub("yuchenlin/BM25S_index_Llama-3-Magpie-Pro-1M-v0.1", revision="main")

# Change the directory where the local files should be downloaded
retriever = BM25HF.load_from_hub("yuchenlin/BM25S_index_Llama-3-Magpie-Pro-1M-v0.1", local_dir="/path/to/dir")

# Load private repositories with a token
retriever = BM25HF.load_from_hub("yuchenlin/BM25S_index_Llama-3-Magpie-Pro-1M-v0.1", token=token)
```

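For the `token` arguments above, any standard Hugging Face authentication method works. One simple pattern (the `HF_TOKEN` variable name here is just an illustrative choice) is to read a personal access token from an environment variable you set yourself:

```python
import os
from bm25s.hf import BM25HF

# Read a personal access token from the environment rather than hard-coding it
token = os.environ.get("HF_TOKEN")

retriever = BM25HF.load_from_hub(
    "yuchenlin/BM25S_index_Llama-3-Magpie-Pro-1M-v0.1", token=token
)
```
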
## Stats

This index was built from a corpus with the following statistics:

| Statistic | Value |
| --- | --- |
| Number of documents | 1,000,000 |
| Number of tokens | 8,343,647 |
| Average tokens per document | 8.34 |

## Parameters

The index was created with the following parameters:

| Parameter | Value |
| --- | --- |
| `k1` | `1.5` |
| `b` | `0.75` |
| `delta` | `0.5` |
| `method` | `lucene` |
| `idf_method` | `lucene` |
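
To rebuild an index with the same settings, these values can be passed when constructing the retriever. A sketch assuming the constructor accepts them as keyword arguments, as documented in the `bm25s` library:

```python
import bm25s
from bm25s.hf import BM25HF

corpus = [
    "a cat is a feline and likes to purr",
    "a dog is the human's best friend and loves to play",
]

# Same scoring configuration as recorded in params.index.json
retriever = BM25HF(corpus=corpus, k1=1.5, b=0.75, delta=0.5, method="lucene", idf_method="lucene")
retriever.index(bm25s.tokenize(corpus))
```
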
corpus.jsonl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dedb8f127bc5ae7f520c88ea128375cdfa7b795fb3b653a749a6e8fc581a8914
size 99478681
corpus.mmindex.json ADDED
The diff for this file is too large to render. See raw diff
 
data.csc.index.npy ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6bb01f19b6579a60076c88379941d3b5c334c8b9b3c3fa78ee8e5637f86201e0
size 33374716
indices.csc.index.npy ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:65f269ce58e2f4dd80fd1272f5186a4557d6e74e1fe4e750851c4dd243d5c8e4
size 33374716
indptr.csc.index.npy ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5c1169384fab70b47e8ca8fbcc1c84255d2c8766b9249e457a998ba050f7b1b1
size 459428
params.index.json ADDED
@@ -0,0 +1,11 @@
{
  "k1": 1.5,
  "b": 0.75,
  "delta": 0.5,
  "method": "lucene",
  "idf_method": "lucene",
  "dtype": "float32",
  "int_dtype": "int32",
  "num_docs": 1000000,
  "version": "0.1.7"
}
vocab.index.json ADDED
The diff for this file is too large to render. See raw diff