---
size_categories: n<1K
tags:
- rlfh
- argilla
- human-feedback
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: id
    dtype: string
  - name: _server_id
    dtype: string
  - name: text
    dtype: string
  - name: label.responses
    sequence: string
  - name: label.responses.users
    sequence: string
  - name: label.responses.status
    sequence: string
  - name: label.suggestion
    dtype: string
  - name: label.suggestion.score
    dtype: 'null'
  - name: label.suggestion.agent
    dtype: 'null'
  - name: topics.suggestion
    sequence: string
  - name: topics.suggestion.score
    sequence: float64
  - name: topics.suggestion.agent
    dtype: 'null'
  - name: comment.suggestion.agent
    dtype: 'null'
  - name: span.suggestion.agent
    dtype: 'null'
  - name: comment_score
    dtype: float64
  - name: rating.suggestion.score
    dtype: 'null'
  - name: span.suggestion
    list:
    - name: end
      dtype: int64
    - name: label
      dtype: string
    - name: start
      dtype: int64
  - name: ranking.suggestion.score
    dtype: 'null'
  - name: comment.suggestion.score
    dtype: float64
  - name: span.suggestion.score
    dtype: 'null'
  - name: ranking.suggestion
    sequence: string
  - name: vector
    sequence: float64
  - name: rating.suggestion.agent
    dtype: 'null'
  - name: ranking.suggestion.agent
    dtype: 'null'
  - name: comment.suggestion
    dtype: string
  - name: rating.suggestion
    dtype: int64
  splits:
  - name: train
    num_bytes: 1237
    num_examples: 4
  download_size: 16938
  dataset_size: 1237
---

# Dataset Card for test-argilla-dataset

This dataset has been created with [Argilla](https://github.com/argilla-io/argilla). As shown in the sections below, this dataset can be loaded into your Argilla server as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Using this dataset with Argilla

To load with Argilla, you'll just need to install Argilla as `pip install argilla --pre --upgrade` and then use the following code:

```python
import argilla as rg

ds = rg.Dataset.from_hub("burtenshaw/test-argilla-dataset")
```

This will load the settings and records from the dataset repository and push them to your Argilla server for exploration and annotation.

## Using this dataset with `datasets`

To load the records of this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:

```python
from datasets import load_dataset

ds = load_dataset("burtenshaw/test-argilla-dataset")
```

This will only load the records of the dataset, but not the Argilla settings.

## Dataset Structure

This dataset repo contains:

* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `rg.Dataset.from_hub` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
* A dataset configuration folder conforming to the Argilla dataset format in `.argilla`.

The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**.

### Fields

The **fields** are the features or text of a dataset's records. For example, the 'text' column of a text classification dataset or the 'prompt' column of an instruction-following dataset.

| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| text | text | text | True | False |

### Questions

The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.
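When the records are loaded with `datasets` rather than Argilla, each question's suggested answers arrive as flattened columns such as `label.suggestion` and `topics.suggestion.score`. As a minimal, dependency-free sketch of working with that layout (the record dict below is illustrative, mirroring this dataset's column names), the flattened columns can be regrouped into one nested dict per question:

```python
# Illustrative record in the flattened `datasets` layout used by this repo.
record = {
    "text": "Hello World, how are you?",
    "label.suggestion": "positive",
    "label.suggestion.score": None,
    "topics.suggestion": ["topic1", "topic2"],
    "topics.suggestion.score": [0.9, 0.8],
}

def group_suggestions(record):
    """Collect `<question>.suggestion*` columns into {question: {attr: value}}."""
    suggestions = {}
    for key, value in record.items():
        if ".suggestion" not in key:
            continue  # plain fields like "text" are left out
        question, _, attr = key.partition(".suggestion")
        entry = suggestions.setdefault(question, {})
        # "<q>.suggestion" holds the suggested value itself;
        # "<q>.suggestion.score" / ".agent" are attributes of that suggestion.
        entry[attr.lstrip(".") or "value"] = value
    return suggestions

grouped = group_suggestions(record)
```

This is only a sketch of the naming convention, not an Argilla API; loading the dataset back through `rg.Dataset.from_hub` performs the equivalent regrouping for you.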
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| label | label | label_selection | True | N/A | ['positive', 'negative'] |
| rating | rating | rating | True | N/A | [1, 2, 3, 4, 5] |
| ranking | ranking | ranking | True | N/A | ['label1', 'label2', 'label3'] |
| comment | comment | text | True | N/A | N/A |
| topics | topics | multi_label_selection | True | N/A | ['topic1', 'topic2', 'topic3'] |
| span | span | span | True | N/A | N/A |

### Metadata

The **metadata** is a dictionary that can be used to provide additional information about the dataset record.

| Metadata Name | Title | Type | Values | Visible for Annotators |
| ------------- | ----- | ---- | ------ | ---------------------- |
| comment_score | comment_score | | None - None | True |

### Vectors

The **vectors** contain a vector representation of the record that can be used in search.

| Vector Name | Title | Dimensions |
|-------------|-------|------------|
| vector | vector | [1, 3] |

### Data Instances

An example of a dataset instance in Argilla looks as follows:

```json
{
    "_server_id": "b14b6b41-1d02-4316-8d5b-80c4947be464",
    "fields": {
        "text": "Hello World, how are you?"
    },
    "id": "5a036b70-0cb9-450a-82eb-27c5e2ecd3a8",
    "metadata": {},
    "responses": {
        "label": [
            {
                "user_id": "06f7d4c0-e048-43d2-ab3f-06f147616ac6",
                "value": "positive"
            }
        ]
    },
    "suggestions": {
        "label": {
            "agent": null,
            "score": null,
            "value": "positive"
        },
        "topics": {
            "agent": null,
            "score": [
                0.9,
                0.8
            ],
            "value": [
                "topic1",
                "topic2"
            ]
        }
    },
    "vectors": {}
}
```

While the same record in HuggingFace `datasets` looks as follows:

```json
{
    "_server_id": "b14b6b41-1d02-4316-8d5b-80c4947be464",
    "comment.suggestion": null,
    "comment.suggestion.agent": null,
    "comment.suggestion.score": null,
    "comment_score": null,
    "id": "5a036b70-0cb9-450a-82eb-27c5e2ecd3a8",
    "label.responses": [
        "positive"
    ],
    "label.responses.status": [
        "draft"
    ],
    "label.responses.users": [
        "06f7d4c0-e048-43d2-ab3f-06f147616ac6"
    ],
    "label.suggestion": "positive",
    "label.suggestion.agent": null,
    "label.suggestion.score": null,
    "ranking.suggestion": null,
    "ranking.suggestion.agent": null,
    "ranking.suggestion.score": null,
    "rating.suggestion": null,
    "rating.suggestion.agent": null,
    "rating.suggestion.score": null,
    "span.suggestion": null,
    "span.suggestion.agent": null,
    "span.suggestion.score": null,
    "text": "Hello World, how are you?",
    "topics.suggestion": [
        "topic1",
        "topic2"
    ],
    "topics.suggestion.agent": null,
    "topics.suggestion.score": [
        0.9,
        0.8
    ],
    "vector": null
}
```

### Data Splits

The dataset contains a single split, which is `train`.

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation guidelines

[More Information Needed]

#### Annotation process

[More Information Needed]

#### Who are the annotators?
[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

[More Information Needed]
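As noted in the Vectors section above, the `vector` column stores a per-record representation intended for search. A minimal, standard-library sketch of what such a search could look like, ranking records against a query by cosine similarity (the vectors and record IDs below are illustrative, not values from this dataset):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Illustrative 3-dimensional vectors keyed by record id.
records = {
    "rec_a": [1.0, 0.0, 0.0],
    "rec_b": [0.0, 1.0, 0.0],
}
query = [0.9, 0.1, 0.0]

# The best match is the record whose vector points most nearly the same way.
best = max(records, key=lambda rid: cosine_similarity(query, records[rid]))
```

In practice the Argilla server performs vector search for you; this only illustrates the underlying similarity computation.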