---
language:
  - fr
tags:
  - france
  - travail
  - emploi
  - embeddings
  - open-data
  - government
pretty_name: French Ministry of Labor and Employment's website Dataset (Travail Emploi)
size_categories:
  - 1K<n<10K
license: etalab-2.0
---

# 🇫🇷 Travail Emploi Website Dataset (French Ministry of Labor and Employment)

This dataset is a processed and embedded version of the public practical information sheets published on the official website of the Ministère du Travail et de l'Emploi (French Ministry of Labor and Employment): travail-emploi.gouv.fr. The data is downloaded from the government's SocialGouv GitHub repository.

The dataset provides structured, chunked, semantic-search-ready data of official content related to employment, labor law, and administrative procedures. The chunks have been vectorized with the BAAI/bge-m3 embedding model to enable semantic search and retrieval tasks.


๐Ÿ—‚๏ธ Dataset Contents

The dataset is provided in Parquet format and includes the following columns:

| Column Name | Type | Description |
|-------------|------|-------------|
| `chunk_id` | str | Unique generated and encoded hash of each chunk. |
| `sid` | str | Article identifier from the source site. |
| `chunk_index` | int | Index of the chunk within its original article. |
| `title` | str | Title of the article. |
| `surtitre` | str | Broader theme (always `"Travail-Emploi"` in this dataset). |
| `source` | str | Dataset source label (always `"travail-emploi"` in this dataset). |
| `introduction` | str | Introductory paragraph of the article. |
| `date` | str | Publication or last update date (format: `DD/MM/YYYY`). |
| `url` | str | URL of the original article. |
| `context` | list[str] | Section names related to the chunk. |
| `text` | str | Textual content extracted and chunked from a section of the article. |
| `chunk_text` | str | Formatted text combining the `title`, `context`, `introduction`, and `text` values, used for embedding. |
| `embeddings_bge-m3` | str | Embedding vector of `chunk_text` computed with BAAI/bge-m3, stored as a JSON array string. |
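
For example, the dataset can be loaded with the 🤗 `datasets` library. The repository id below is illustrative, not confirmed by this card; substitute the dataset's actual Hub path:

```python
from datasets import load_dataset

# Hypothetical repository id -- replace with this dataset's actual Hub path.
ds = load_dataset("AgentPublic/travail-emploi", split="train")

print(ds.column_names)   # chunk_id, sid, chunk_index, title, ...
print(ds[0]["title"])    # title of the first chunk's article
```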

๐Ÿ› ๏ธ Data Processing Methodology

### 📥 1. Field Extraction

The following fields were extracted and/or transformed from the original JSON:

- Basic fields: `sid` (i.e. `pubID`), `title`, `introduction` (i.e. `intro`), `date`, and `url` are extracted directly from the JSON attributes.
- Generated fields:
  - `chunk_id`: a unique generated and encoded hash for each chunk (see the sketch after this list).
  - `chunk_index`: the index of the chunk within its article. Each article has a unique `sid`.
  - `source`: always `"travail-emploi"` here.
  - `surtitre`: always `"Travail-Emploi"` here.
- Textual fields:
  - `context`: optional contextual hierarchy (e.g., nested section names).
  - `text`: the textual content of the article chunk, i.e. a semantically coherent fragment of text extracted from the XML document structure for a given `sid`.
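
The exact hashing scheme behind `chunk_id` is not documented here; a minimal sketch of how such a deterministic identifier could be derived (the hashed inputs below are an assumption) might look like this:

```python
import hashlib

def make_chunk_id(sid: str, chunk_index: int, text: str) -> str:
    # Hypothetical scheme: hashing the article id, chunk index, and chunk
    # text is an assumption, not the documented pipeline. Any deterministic,
    # collision-resistant hash would serve the same purpose.
    payload = f"{sid}:{chunk_index}:{text}".encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

print(make_chunk_id("pub123", 0, "Some chunk text"))
```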

The `source` and `surtitre` columns hold fixed values because this dataset was built at the same time as the Service Public dataset. Both datasets were intended to be grouped in a single vector collection, so they carry different `source` and `surtitre` values to keep them distinguishable.

โœ‚๏ธ 2. Generation of 'chunk_text'

The `chunk_text` value combines the `title` and `introduction` of the article, the `context` values of the chunk, and the chunked textual content (`text`). This strategy is designed to improve semantic search for document retrieval use cases on administrative procedures.

LangChain's `RecursiveCharacterTextSplitter` was used to produce the chunks, with the following parameters:

- `chunk_size = 1500` (to maximize compatibility with most LLM context windows)
- `chunk_overlap = 20`
- `length_function = len`
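
A minimal sketch of this chunking step follows; the splitter parameters come from the list above, while the article fields and the exact `chunk_text` layout are assumptions for illustration:

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=1500,    # maximize compatibility with most LLM context windows
    chunk_overlap=20,
    length_function=len,
)

# Hypothetical article fields; the real pipeline reads them from the source JSON.
title = "Le contrat d'apprentissage"
introduction = "L'apprentissage repose sur le principe de l'alternance..."
context = ["Qu'est-ce que le contrat d'apprentissage ?"]
section_text = "Le contrat d'apprentissage est un contrat de travail ..."

# Assumed chunk_text layout: title, context, introduction, then the chunk.
chunk_texts = [
    "\n".join([title, " > ".join(context), introduction, chunk])
    for chunk in splitter.split_text(section_text)
]
```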

### 🧠 3. Embeddings Generation

Each `chunk_text` was embedded using the BAAI/bge-m3 model. The resulting embedding vector is stored in the `embeddings_bge-m3` column as a string, but it can easily be parsed back into a `list[float]` or a NumPy array.
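
A minimal sketch of this embedding step, using the `FlagEmbedding` package that provides the official BAAI/bge-m3 implementation (the batching and serialization details of the actual pipeline are assumptions):

```python
import json

from FlagEmbedding import BGEM3FlagModel

model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)

chunk_texts = ["..."]  # the chunk_text values built in step 2

# encode() returns a dict; the dense vectors live under "dense_vecs".
dense_vecs = model.encode(chunk_texts)["dense_vecs"]

# Serialize each 1024-dimensional vector as a JSON array string, as stored here.
embeddings = [json.dumps(vec.tolist()) for vec in dense_vecs]
```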

## 📌 Embedding Use Notice

โš ๏ธ The embeddings_bge-m3 column is stored as a stringified list of floats (e.g., "[-0.03062629,-0.017049594,...]"). To use it as a vector, you need to parse it into a list of floats or NumPy array. For example, if you want to load the dataset into a dataframe :

```python
import json

import pandas as pd

df = pd.read_parquet("travail_emploi.parquet")

# Parse each JSON array string back into a list of floats.
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
```

## 📚 Source & License

### 🔗 Source

The practical information sheets are published on travail-emploi.gouv.fr and retrieved from the government's SocialGouv GitHub repository.

### 📄 License

Open License (Etalab): this dataset is publicly available and can be reused under the conditions of the Etalab Open License 2.0.