---
license: apache-2.0
language:
  - de
tags:
  - sign-language
  - whisper
  - german
  - safetensors
library_name: transformers
model-index:
  - name: whisper-large-v3-turbo-german
    results:
      - task:
          type: automatic-speech-recognition
          name: Speech Recognition
        dataset:
          name: German ASR Data-Mix
          type: flozi00/asr-german-mixed
        metrics:
          - type: wer
            value: TBD
datasets:
  - flozi00/asr-german-mixed
base_model:
  - primeline/whisper-large-v3-german
---

## Summary

Whisper is a speech recognition model developed by OpenAI. This model has been specially adapted to convert sign-language input features into German text.

## Applications

The model is based on `primeline/whisper-large-v3-german` and is used (in combination with Google MediaPipe) to translate videos of German Sign Language into text. The model decodes a sequence of input features into text, where each input feature represents keypoints (hands, upper body, and face) extracted from the video.
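
As a rough illustration (an assumption, not the released preprocessing code), the sketch below stacks per-frame keypoint vectors into the `batch x 80 x sequence` layout implied by the `in_channels: 80` setting of the first preprocessing convolution shown further down:

```python
import torch

# Hypothetical illustration only: assume the hand, upper-body and face keypoints
# of each video frame are flattened into an 80-dimensional vector.
num_frames = 120
keypoints = torch.randn(num_frames, 80)   # frames x 80

# The encoder expects (batch, channels, sequence), i.e. 1 x 80 x 120 here.
input_features = keypoints.T.unsqueeze(0)
print(input_features.shape)               # torch.Size([1, 80, 120])
```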

We keep the decoder frozen while training the encoder.
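
As an illustration, freezing the decoder of a standard `WhisperForConditionalGeneration` could look like the sketch below (an assumption; the repository's custom model.py may handle this internally):

```python
from transformers import WhisperForConditionalGeneration

model = WhisperForConditionalGeneration.from_pretrained("mrprimenotes/sign-whisper-german")

# Freeze all decoder parameters so only the encoder (and the preprocessing layers) are updated.
for param in model.model.decoder.parameters():
    param.requires_grad = False
```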

## Evaluations - Word error rate

TBD

## Training data

TBD

## Training process

```python
import torch
from transformers import (
    WhisperForConditionalGeneration,
    AutoProcessor,
    AutoTokenizer,
    AutoConfig,
    Trainer,
    TrainingArguments,
)

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

# When changing the configuration of the preprocessing convolution layers, make sure their final output has the shape b x 1280 x seq.
# See custom config in model.py for configuration options.

config = AutoConfig.from_pretrained(
    "mrprimenotes/sign-whisper-german",
    use_first_embeddings=True,
    embedding_stride=2,
    conv_dropout=0.1,
    skip_connections=True,
    conv_preprocessing_layers=[
        {
            "in_channels": 80,
            "out_channels": 384,
            "kernel_size": 5,
            "padding": 2,
            "activation": "gelu"
        },
        {
            "in_channels": 384,
            "out_channels": 384,
            "kernel_size": 3,
            "stride": 2,
            "padding": 1,
            "activation": "gelu"
        }
    ]
)

tokenizer = AutoTokenizer.from_pretrained("mrprimenotes/sign-whisper-german")

# Load the model with the custom config. The repository ships a custom model.py,
# so loading it may additionally require trust_remote_code=True.
model = WhisperForConditionalGeneration.from_pretrained(
    "mrprimenotes/sign-whisper-german",
    config=config,
    torch_dtype=torch_dtype,
).to(device)

# raw model outputs:
# output = model(input_features, labels=labels)
# e.g.
# output.loss
# output.shape --> b x sq
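
# Hypothetical dataset sketch (an assumption, not shipped with this repository).
# It assumes every example yields the preprocessed keypoint features together
# with the tokenized German target sentence; adapt it to your own data pipeline.
from torch.utils.data import Dataset

class YourSignDataset(Dataset):
    def __init__(self, feature_tensors, target_texts, tokenizer):
        self.feature_tensors = feature_tensors  # list of tensors, shape 80 x seq
        self.target_texts = target_texts        # list of German target sentences
        self.tokenizer = tokenizer

    def __len__(self):
        return len(self.feature_tensors)

    def __getitem__(self, idx):
        labels = self.tokenizer(self.target_texts[idx], return_tensors="pt").input_ids[0]
        return {"input_features": self.feature_tensors[idx], "labels": labels}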

train_dataset = YourSignDataset(...)
val_dataset = YourSignDataset(...)

# Define training arguments
training_args = TrainingArguments(
    output_dir="./sign-whisper-german",
    num_train_epochs=3,
    per_device_train_batch_size=1024,
    per_device_eval_batch_size=256,
    warmup_steps=500,
    weight_decay=0.01,

    # Logging settings
    logging_dir="./logs",
    logging_steps=50,
    logging_strategy="steps",

    # Evaluation
    evaluation_strategy="steps",
    eval_steps=100,

    # Saving
    save_strategy="steps",
    save_steps=100,
    save_total_limit=5,
    resume_from_checkpoint=True,

    load_best_model_at_end=True,
    fp16=torch.cuda.is_available(),
)

# Initialize trainer with tokenizer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=val_dataset,
    tokenizer=tokenizer,
)

# Train the model
trainer.train()
```
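
If a run is interrupted, it can be resumed by pointing `trainer.train()` at the last checkpoint (standard `Trainer` behaviour, shown here as a short sketch):

```python
# Resume from the latest checkpoint found in output_dir
trainer.train(resume_from_checkpoint=True)
```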

## Use model for inference (with generate)

```python
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_special_tokens=False)  # only needed for streaming

# input preprocessing / feature extraction (TBD)
# input_features = ...

# Generate
generated_ids = model.generate(
    input_features,
    max_new_tokens=128,
    return_timestamps=False,  # timestamps are not supported
    streamer=streamer,        # only needed for streaming
)

tokenizer.batch_decode(generated_ids, skip_special_tokens=False)
```
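
To obtain plain text without the special tokens, the same call can be made with `skip_special_tokens=True` (a standard tokenizer option):

```python
text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
```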