Update README.md
README.md
CHANGED
@@ -8,7 +8,7 @@ tags:
 - embeddings
 - open-data
 - government
-pretty_name:
+pretty_name: French Minister of Labor and Employment's website Dataset (Travail Emploi)
 size_categories:
 - 1K<n<10K
 license: etalab-2.0
@@ -42,7 +42,7 @@ The dataset is provided in **Parquet format** and includes the following columns
 | `context` | `list[str]` | Section names related to the chunk. |
 | `text` | `str` | Textual content extracted and chunked from a section of the article. |
 | `chunk_text` | `str` | Formatted text including `title`, `context`, `introduction` and `text` values for embedding. |
-| `
+| `embeddings_bge-m3` | `str` | Embedding vector of `chunk_text` using `BAAI/bge-m3`, stored as a JSON array string. |

 ---
 ## 🛠️ Data Processing Methodology
@@ -75,13 +75,13 @@ The Langchain's `RecursiveCharacterTextSplitter` function was used to make these
 - `chunk_overlap` = 20
 - `length_function` = len

-### 🧠 3.
+### 🧠 3. Embeddings Generation

-Each `chunk_text` was embedded using the [**`BAAI/bge-m3`**](https://huggingface.co/BAAI/bge-m3) model. The resulting embedding vector is stored in the `
+Each `chunk_text` was embedded using the [**`BAAI/bge-m3`**](https://huggingface.co/BAAI/bge-m3) model. The resulting embedding vector is stored in the `embeddings_bge-m3` column as a **string**, but can easily be parsed back into a `list[float]` or a NumPy array.

 ## 📌 Embedding Use Notice

-⚠️ The `
+⚠️ The `embeddings_bge-m3` column is stored as a **stringified list** of floats (e.g., `"[-0.03062629,-0.017049594,...]"`).
 To use it as a vector, you need to parse it into a list of floats or a NumPy array. For example, to load the dataset into a dataframe:

 ```python
@@ -89,7 +89,7 @@ import pandas as pd
 import json

 df = pd.read_parquet("travail_emploi.parquet")
-df["
+df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
 ```

 ## 📚 Source & License
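Below the diff, a few sketches illustrate the processing steps it describes. First, the chunking step: only `chunk_overlap = 20` and `length_function = len` are stated in this excerpt, so the import path, the `chunk_size` value and the `article_text` variable in this sketch are illustrative assumptions.

```python
# Minimal sketch of the chunking step; chunk_size is an assumed value.
from langchain_text_splitters import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,      # assumption: not stated in this excerpt
    chunk_overlap=20,     # value stated in the card
    length_function=len,  # value stated in the card
)

article_text = "Full text of one section of an article..."  # placeholder input
chunks = splitter.split_text(article_text)  # list[str] of chunk `text` values
```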
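Next, the embeddings generation step. The card names the `BAAI/bge-m3` model but not the inference library, so `sentence-transformers` and the `normalize_embeddings=True` flag below are assumptions; the serialization mirrors the `embeddings_bge-m3` column format described above.

```python
# Hypothetical sketch of the embedding step; the exact library is not specified in the card.
import json
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-m3")

chunk_texts = ["Example chunk_text value", "Another chunk_text value"]
vectors = model.encode(chunk_texts, normalize_embeddings=True)  # one vector per chunk

# Store each vector as a JSON array string, matching the `embeddings_bge-m3` column.
embeddings_bge_m3 = [json.dumps(vec.tolist()) for vec in vectors]
```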
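Finally, a possible follow-up to the parsing example in the diff: once `embeddings_bge-m3` has been converted with `json.loads`, the vectors can be stacked into a NumPy matrix, for instance for a simple cosine-similarity search. This sketch assumes the `df` from the parsing snippet and the `text` column from the table above.

```python
# Hypothetical continuation of the parsing example above (requires `df`).
import numpy as np

matrix = np.array(df["embeddings_bge-m3"].tolist())  # shape: (n_chunks, embedding_dim)

query_vec = matrix[0]  # stand-in for an embedded query vector of the same dimension
scores = matrix @ query_vec / (
    np.linalg.norm(matrix, axis=1) * np.linalg.norm(query_vec)
)
top_idx = np.argsort(-scores)[:5]      # indices of the 5 most similar chunks
print(df.iloc[top_idx]["text"])        # their textual content
```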