---
tags:
- time series
- time series classification
- monster
- audio
license: mit
size_categories:
- 10K<n<100K
---

|AudioMNIST||
|-|-:|
|Category|Audio|
|Num. Examples|30,000|
|Num. Channels|1|
|Length|47,998|
|Sampling Freq.|48 kHz|
|Num. Classes|10|
|License|[MIT](https://opensource.org/license/mit)|
|Citations|[1] [2]|

***AudioMNIST*** consists of audio recordings of 60 different speakers saying the digits 0 to 9, with 50 recordings per digit per speaker [1, 2]. The speakers are a mixture of ages and genders. The recordings are single channel and have a sampling rate of 48 kHz. The learning task is to classify the spoken digit based on the audio recording.

The processed dataset contains 30,000 (univariate) time series, each of length 47,998 (approximately 1 second of data sampled at 48 kHz), with ten classes representing the digits 0 to 9. This version of the dataset has been split into cross-validation folds based on speaker (i.e., such that recordings for a given speaker do not appear in both the training and validation sets).

***AudioMNIST-DS*** is a variant of the same dataset in which the time series have been downsampled to a length of 4,000 (i.e., an effective sampling rate of approximately 4 kHz).

[1] Sören Becker, Johanna Vielhaben, Marcel Ackermann, Klaus-Robert Müller, Sebastian Lapuschkin, and Wojciech Samek. (2024). AudioMNIST: Exploring explainable artificial intelligence for audio analysis on a simple benchmark. *Journal of the Franklin Institute*, 361(1):418–428.

[2] Sören Becker, Johanna Vielhaben, Marcel Ackermann, Klaus-Robert Müller, Sebastian Lapuschkin, and Wojciech Samek. (2024). AudioMNIST. <https://github.com/soerenab/AudioMNIST>. MIT License.
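
As a rough illustration of the speaker-based splitting and the downsampling described above, the following sketch uses scikit-learn's `GroupKFold` and `scipy.signal.resample` on placeholder arrays shaped like the data. It is not the tooling used to produce the dataset; the array names, the number of folds, and the choice of FFT-based resampling are assumptions for illustration only.

```python
# Minimal, illustrative sketch (not the official MONSTER tooling):
# (a) speaker-grouped cross-validation, so no speaker appears in both the
#     training and validation sets of any fold, and
# (b) downsampling a ~48 kHz recording to length 4,000 (as in AudioMNIST-DS).
import numpy as np
from scipy.signal import resample
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)

# Small placeholder arrays standing in for the real data, which has shape
# (30000, 47998), labels 0-9, and 60 distinct speakers.
n_examples, length = 600, 47_998
X = rng.standard_normal((n_examples, length)).astype(np.float32)  # recordings
y = rng.integers(0, 10, size=n_examples)                          # spoken digits
speakers = rng.integers(0, 60, size=n_examples)                   # speaker ids

# Group folds by speaker; 5 folds chosen only for illustration.
gkf = GroupKFold(n_splits=5)
for fold, (train_idx, val_idx) in enumerate(gkf.split(X, y, groups=speakers)):
    assert set(speakers[train_idx]).isdisjoint(speakers[val_idx])
    print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} validation")

# Downsample one recording from length 47,998 (~48 kHz) to length 4,000
# (~4 kHz), matching the length of the AudioMNIST-DS variant.
x_ds = resample(X[0], 4_000)
print(x_ds.shape)  # (4000,)
```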