---
pretty_name: Danske Taler
language:
- da
license: cc0-1.0
license_name: CC-0
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
domains:
- Conversation
- Speeches
- Spoken
---

# Dataset Card for Danske Taler

Danish speeches from [dansketaler.dk](https://www.dansketaler.dk).

The database dansketaler.dk is managed by Danske Taler, an independent institution that, in addition to maintaining the database, carries out cultural and democratic projects based on speeches. Danske Taler states that its goals are to preserve our cultural heritage and to promote active citizenship and democratic confidence. Additionally, Danske Taler provides data to a number of online resources, including lex.dk, sprogteknologi.dk, and ordnet.dk.

The goal of the dataset is to collect historical and contemporary speeches and make them available to the public.

Learn more about Danske Taler by reading their [about us](https://www.dansketaler.dk/om-os) page.

> NOTE: Danske Taler also collects [sermons](https://www.dansketaler.dk/praedikener), but these are not included in this dataset.

## Dataset Description

- **Language**: dan, dansk, Danish
- **Domains**: Conversation, Speeches, Spoken
- **Number of samples**: 2.91K
- **Number of tokens (Llama 3)**: 8.72M
- **Average document length (characters)**: 9140.42

## Dataset Structure

An example from the dataset looks as follows.

```py
{
    "id": "danske-taler_281",
    "text": "Tyske landsmænd og -kvinder !\nSyv år er kort tid, en brøkdel af en enkel menneskelig normaltilværels[...]",
    "source": "danske-taler",
    "added": "2025-06-24",
    "created": "1940-01-30, 1940-01-30",
    "token_count": 3020
}
```

### Data Fields

An entry in the dataset consists of the following fields:

- `id` (`str`): A unique identifier for each document.
- `text` (`str`): The content of the document.
- `source` (`str`): The source of the document (see [Source Data](#source-data)).
- `added` (`str`): The date when the document was added to this collection.
- `created` (`str`): A date range for when the document was originally created.
- `token_count` (`int`): The number of tokens in the sample, computed using the Llama 3 tokenizer.

### Dataset Statistics
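
The summary statistics listed under [Dataset Description](#dataset-description) can be recomputed directly from the fields described above. Below is a minimal sketch, assuming the data is hosted on the Hugging Face Hub; the repository ID `your-org/danske-taler` is a placeholder and not part of the dataset card.

```py
# Minimal sketch for recomputing the summary statistics above.
# The repository ID is hypothetical; substitute the actual location of the data.
from datasets import load_dataset

ds = load_dataset("your-org/danske-taler", split="train")  # placeholder repo ID

num_docs = len(ds)
total_tokens = sum(ds["token_count"])  # precomputed Llama 3 token counts
avg_chars = sum(len(text) for text in ds["text"]) / num_docs

print(f"Number of samples: {num_docs}")
print(f"Number of tokens (Llama 3): {total_tokens}")
print(f"Average document length (characters): {avg_chars:.2f}")
```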