Sentence Similarity · English · txtai

davidmezzetti committed d09bed6 (1 parent: ffc6fcb)

June 2025 data update

Files changed (4):

1. README.md (+6, -6)
2. config.json (+6, -6)
3. documents (+2, -2)
4. embeddings (+2, -2)
README.md CHANGED

````diff
@@ -8,14 +8,14 @@ library_name: txtai
 tags:
 - sentence-similarity
 datasets:
-- NeuML/wikipedia-20250123
+- NeuML/wikipedia-20250620
 ---

 # Wikipedia txtai embeddings index

 This is a [txtai](https://github.com/neuml/txtai) embeddings index for the [English edition of Wikipedia](https://en.wikipedia.org/).

-This index is built from the [Wikipedia January 2025 dataset](https://huggingface.co/datasets/neuml/wikipedia-20250123). Only the first paragraph of the [lead section](https://en.wikipedia.org/wiki/Wikipedia:Manual_of_Style/Lead_section) from each article is included in the index. This is similar to an abstract of the article.
+This index is built from the [Wikipedia June 2025 dataset](https://huggingface.co/datasets/neuml/wikipedia-20250620). Only the first paragraph of the [lead section](https://en.wikipedia.org/wiki/Wikipedia:Manual_of_Style/Lead_section) from each article is included in the index. This is similar to an abstract of the article.

 It also uses [Wikipedia Page Views](https://dumps.wikimedia.org/other/pageviews/readme.html) data to add a `percentile` field. The `percentile` field can be used
 to only match commonly visited pages.
@@ -65,7 +65,7 @@ Performance was evaluated using the [NDCG@10](https://en.wikipedia.org/wiki/Disc

 ## Build the index

-The following steps show how to build this index. These scripts are using the latest data available as of 2025-01-23, update as appropriate.
+The following steps show how to build this index. These scripts use the latest data available as of 2025-06-20; update as appropriate.

 - Install required build dependencies
 ```bash
@@ -75,7 +75,7 @@ pip install ragdata mwparserfromhell
 - Download and build pageviews database
 ```bash
 mkdir -p pageviews/data
-wget -P pageviews/data https://dumps.wikimedia.org/other/pageview_complete/monthly/2025/2025-01/pageviews-202501-user.bz2
+wget -P pageviews/data https://dumps.wikimedia.org/other/pageview_complete/monthly/2025/2025-06/pageviews-202506-user.bz2
 python -m ragdata.wikipedia.views -p en.wikipedia -v pageviews
 ```

@@ -85,7 +85,7 @@ python -m ragdata.wikipedia.views -p en.wikipedia -v pageviews
 from datasets import load_dataset

 # Data dump date from https://dumps.wikimedia.org/enwiki/
-date = "20250123"
+date = "20250620"

 # Build and save dataset
 ds = load_dataset("neuml/wikipedia", language="en", date=date)
@@ -95,7 +95,7 @@ ds.save_to_disk(f"wikipedia-{date}")
 - Build txtai-wikipedia index
 ```bash
 python -m ragdata.wikipedia.index \
-    -d wikipedia-20250123 \
+    -d wikipedia-20250620 \
     -o txtai-wikipedia \
     -v pageviews/pageviews.sqlite
 ```
````
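The README change above is the heart of this update: the index now carries June 2025 abstracts alongside the `percentile` pageviews field. A minimal consumption sketch, assuming this repository is published on the Hub as `neuml/txtai-wikipedia` and using an arbitrary example query:

```python
from txtai import Embeddings

# Load this embeddings index directly from the Hugging Face Hub
embeddings = Embeddings()
embeddings.load(provider="huggingface-hub", container="neuml/txtai-wikipedia")

# percentile comes from the Wikipedia Page Views data; filtering on it
# restricts matches to commonly visited pages (top 1% here)
results = embeddings.search("""
    SELECT id, text, score FROM txtai
    WHERE similar('Roman Empire') AND percentile >= 0.99
""")
```

Dropping the `percentile` clause searches all indexed abstracts.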
config.json CHANGED

```diff
@@ -14,15 +14,15 @@
   "content": true,
   "dimensions": 768,
   "backend": "faiss",
-  "offset": 6332694,
+  "offset": 6393125,
   "build": {
-    "create": "2025-02-10T20:02:33Z",
-    "python": "3.9.21",
+    "create": "2025-07-03T15:11:29Z",
+    "python": "3.10.18",
     "settings": {
-      "components": "IVF2251,SQ8"
+      "components": "IVF2262,SQ8"
     },
     "system": "Linux (x86_64)",
-    "txtai": "8.3.0"
+    "txtai": "8.7.0"
   },
-  "update": "2025-02-10T20:02:33Z"
+  "update": "2025-07-03T15:11:29Z"
 }
```
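Reading the config diff: `offset` appears to track the number of indexed rows, so the update grows the index from 6,332,694 to 6,393,125 entries, and the FAISS setting moves from `IVF2251` to `IVF2262` cells to match. A quick sanity check, continuing from the loading sketch above (the expected value is an assumption tied to that reading of `offset`):

```python
# embeddings is the txtai Embeddings instance loaded above
print(embeddings.count())    # expected to match the new offset: 6393125
print(6393125 - 6332694)     # 60431 net new rows since the January build
```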
documents CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:f0346befb651ac69494d4de63c62096099cf6190019c301f94cca5bcbff57301
-size 3405090816
+oid sha256:8455e7fc929c97437db0cb8cbef3adf9d5243cdfc272ff2d97c6c48a9892d415
+size 3440136192
```
embeddings CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:118afed43b574b98db1019f4ef928254e0a04af82f43ad6497e4bd720a808627
-size 4921109952
+oid sha256:536aea23237a8c9673980a020b567d1913cc29cbc51b8e6b4a21c4797fedab1b
+size 4968038288
```
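`documents` and `embeddings` are Git LFS pointer files, so these diffs only swap the sha256 oid and byte size of the underlying blobs. A hedged sketch with a hypothetical `verify` helper for checking locally downloaded copies against the updated pointers:

```python
import hashlib
import os

def verify(path, oid, size):
    """Compare a downloaded file against its Git LFS pointer (sha256 oid + byte size)."""
    if os.path.getsize(path) != size:
        return False
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in 1 MiB chunks to avoid loading multi-GB files into memory
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha.update(chunk)
    return sha.hexdigest() == oid

# Values from the updated pointers above
print(verify("documents", "8455e7fc929c97437db0cb8cbef3adf9d5243cdfc272ff2d97c6c48a9892d415", 3440136192))
print(verify("embeddings", "536aea23237a8c9673980a020b567d1913cc29cbc51b8e6b4a21c4797fedab1b", 4968038288))
```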