
This is a Whisper-formatted version of the ivrit.ai Knesset Plenums dataset.
This dataset was created by splitting long audio recordings, along with their respective transcriptions, into audio slices of 30 seconds or less. Each such slice represents one or more consecutive segments, together with timestamp token data and the previous slice's transcription.
The code for this dataset preparation process is available in the ivrit.ai ASR Training GitHub repo.
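The actual preparation code lives in that repo; the snippet below is only a rough sketch of the slicing idea, not the repo's implementation. It assumes each recording comes with a list of `(start, end, text)` segments (a hypothetical input format) and shows how consecutive segments might be grouped into slices of at most 30 seconds, with relative timestamp tokens and a previous-slice transcript.

```python
# Illustrative sketch only -- not the code from the ivrit.ai ASR Training repo.
# Assumes `segments` is a list of (start, end, text) tuples for one recording.
MAX_SLICE_SECONDS = 30.0

def slice_segments(segments):
    slices, current, slice_start, prev_text = [], [], None, ""
    for start, end, text in segments:
        if slice_start is None:
            slice_start = start
        # Close the current slice if adding this segment would exceed 30 s.
        if current and end - slice_start > MAX_SLICE_SECONDS:
            slices.append(build_slice(current, slice_start, prev_text))
            prev_text = " ".join(t for _, _, t in current)
            current, slice_start = [], start
        current.append((start, end, text))
    if current:
        slices.append(build_slice(current, slice_start, prev_text))
    return slices

def build_slice(segments, slice_start, prev_text):
    # Re-express segment times relative to the slice start and wrap them in
    # Whisper-style <|t|> timestamp tokens.
    transcript = "".join(
        f"<|{start - slice_start:.2f}|>{text}<|{end - slice_start:.2f}|>"
        for start, end, text in segments
    )
    return {
        "seek": slice_start,          # slice start within the source audio
        "transcript": transcript,
        "has_prev": bool(prev_text),
        "prev_transcript": prev_text,
        "has_timestamps": True,
    }
```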
Each example in the dataset contains:
- `audio`: An audio column containing:
  - `bytes`: The audio data encoded in MP3 format
  - `path`: A string identifier derived from the source entry ID
- `transcript`: A string containing the text, potentially with Whisper-style timestamp tokens (e.g., `<|0.00|>text<|2.40|>`) if `has_timestamps` is true
- `metadata`: A dictionary containing:
  - `seek`: Float indicating the start time of this slice in the original source audio
  - `source`: String identifier for the source of the audio (name of podcast, production system, etc.)
  - `entry_id`: Unique identifier for the source entry
  - `quality_score`: Segment median quality score
  - `plenum_date`: Date of the plenum
  - `has_prev`: Boolean indicating if this slice has transcript from the previous slice within the audio source
  - `has_timestamps`: Boolean indicating if the transcript contains timestamp tokens
  - `prev_transcript`: String containing the transcript of the previous slice (empty if `has_prev` is false)
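For reference, a minimal sketch of loading the data with the Hugging Face `datasets` library and reading the fields above; the repository ID and split name used here are placeholders, not the dataset's confirmed identifiers.

```python
from datasets import load_dataset

# Placeholder repo ID and split -- substitute the actual ivrit.ai
# Knesset Plenums dataset ID from the Hub.
ds = load_dataset("ivrit-ai/knesset-plenums-whisper", split="train", streaming=True)

# Inspect the first example to see the fields described above.
example = next(iter(ds))
print(example["transcript"])            # text, possibly with <|t|> timestamp tokens
print(example["metadata"]["seek"])      # slice start time within the source audio
print(example["metadata"]["has_prev"])  # whether prev_transcript is populated
audio = example["audio"]                # raw bytes/path, or a decoded array
                                        # depending on the Audio feature settings
```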