language:
  - mam
license: other
tags:
  - automatic-speech-recognition
  - sil-ai/bloom-speech
  - generated_from_trainer
datasets:
  - bloom_speech
model-index:
  - name: wav2vec2-bloom-speech-mam
    results:
      - task:
          name: Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: Bloom Speech mam
          type: sil-ai/bloom-speech
          args: mam
        metrics:
          - name: Test WER
            type: wer
            value: 30.23
          - name: Test CER
            type: cer
            value: 7.87
extra_gated_prompt: >-
  One more step before getting this model.


  This model is open access and available only for non-commercial use, with an
  SIL International AI & NLP RAIL-M license further specifying rights and usage.


  The SIL RAIL-M License specifies: 


  1. You can't use the model to deliberately produce or share illegal or
  harmful outputs or content. In particular, you cannot use the model with
  the intent or effect of harming or enabling discrimination against
  Indigenous People.

  2. SIL claims no rights on outputs you generate for non-commercial use; you
  are free to use them and are accountable for their use, which must not go
  against the provisions set in the license.

  3. You may re-distribute the weights and use the model non-commercially
  including as a service. If you do, please be aware you have to include the
  same use restrictions as the ones in the license and share a copy of the SIL
  International AI & NLP RAIL-M to all your users (please read the license
  entirely and carefully). Please read the full license here:
  https://huggingface.co/spaces/sil-ai/model-license


  By clicking on "Access repository" below, you accept that your *contact
  information* (email address and username) can be shared with the model authors
  as well.


  If you would like to ask about commercial uses of this model, please [email
  us](mailto:[email protected]).
    
extra_gated_fields:
  I have read the License and agree with its terms: checkbox

wav2vec2-bloom-speech-mam


Model description

This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the SIL-AI/bloom-speech - MAM (Mam) dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5938
  • Wer: 0.3023
  • Cer: 0.0787
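The reported WER and CER are edit-distance metrics: word-level (or character-level) Levenshtein distance divided by the reference length. A minimal pure-Python sketch of how such values are computed (the actual training pipeline presumably used a library such as `jiwer` or `evaluate`; this is only an illustration of the definition):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (one-row DP)."""
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1,         # deletion
                                   d[j - 1] + 1,     # insertion
                                   prev + (r != h))  # substitution
    return d[-1]

def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / reference word count."""
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference, hypothesis):
    """Character error rate: char-level edit distance / reference length."""
    return edit_distance(list(reference), list(hypothesis)) / len(reference)
```

So a test WER of 30.23 means roughly 3 word edits (insertions, deletions, substitutions) per 10 reference words.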

Users should refer to the original model for tutorials on using a trained model for inference.
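As background for those tutorials: the checkpoint is a CTC model, so the heart of inference is collapsing the frame-wise argmax predictions. A minimal sketch of greedy CTC decoding (the vocabulary and blank index below are illustrative, not the model's actual tokenizer):

```python
def ctc_greedy_decode(frame_ids, vocab, blank_id=0):
    """Collapse repeated frame predictions and drop CTC blanks.

    frame_ids: per-frame argmax token ids from the model's logits.
    vocab: id -> character mapping (illustrative only; the real model
           uses its own tokenizer vocabulary).
    """
    out = []
    prev = None
    for t in frame_ids:
        if t != prev and t != blank_id:
            out.append(vocab[t])
        prev = t
    return "".join(out)

# Illustrative vocabulary; index 0 is the CTC blank.
vocab = {0: "<pad>", 1: "m", 2: "a"}
# Blanks (0) separate genuine repeats; consecutive duplicates collapse.
print(ctc_greedy_decode([1, 0, 2, 0, 1, 1, 2], vocab))  # -> "mama"
```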

Intended uses & limitations

Users of this model must abide by the SIL RAIL-M License.

This model was created as a proof of concept, and no guarantees are made regarding the performance of the model in specific situations.

Training and evaluation data

Training, Validation, and Test datasets were generated from the same corpus, ensuring that no duplicate files were used.
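One common way to enforce that property is to assign each unique audio file to a split deterministically, e.g. by hashing its identifier. The scheme below is purely illustrative (it is an assumption, not the Bloom Speech pipeline's actual recipe):

```python
import hashlib

def assign_split(file_id, train=0.8, valid=0.1):
    """Deterministically assign a file to train/validation/test by hashing
    its identifier, so the same file can never land in two splits.
    Illustrative scheme only, not the dataset's documented procedure."""
    h = int(hashlib.sha256(file_id.encode()).hexdigest(), 16) % 1000 / 1000
    if h < train:
        return "train"
    if h < train + valid:
        return "validation"
    return "test"

# The mapping is stable across runs because it depends only on the name.
files = [f"clip_{i:03d}.wav" for i in range(10)]
splits = {f: assign_split(f) for f in files}
```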

Training procedure

Standard fine-tuning of XLS-R was used, based on the examples in the Hugging Face Transformers GitHub repository.

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0003
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 250
  • num_epochs: 1000.0
  • mixed_precision_training: Native AMP
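With these settings, the effective batch size is 16 × 2 = 32 (per-device batch size times gradient accumulation steps), and the `linear` scheduler ramps the learning rate up over the first 250 optimizer steps, then decays it linearly to zero. A sketch of that schedule (`total_steps` here is illustrative; the real value depends on dataset size, batch size, and num_epochs):

```python
def linear_schedule_lr(step, base_lr=3e-4, warmup_steps=250, total_steps=3250):
    """Linear warmup then linear decay to zero, matching the shape of the
    Hugging Face `linear` lr_scheduler_type. total_steps is an assumed,
    illustrative value."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    # Linear decay from base_lr at the end of warmup to 0 at total_steps.
    remaining = max(0, total_steps - step)
    return base_lr * remaining / (total_steps - warmup_steps)

# Effective batch size: per-device batch * gradient accumulation steps.
effective_batch = 16 * 2  # -> 32
```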

Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    | Cer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| No log        | 6.1   | 250  | 3.1703          | 1.0    | 1.0    |
| 4.9494        | 12.2  | 500  | 2.8022          | 1.0    | 1.0    |
| 4.9494        | 18.29 | 750  | 1.3280          | 0.8842 | 0.3435 |
| 1.5795        | 24.39 | 1000 | 0.6121          | 0.5177 | 0.1293 |
| 1.5795        | 30.49 | 1250 | 0.5740          | 0.4759 | 0.1181 |
| 0.3087        | 36.59 | 1500 | 0.4996          | 0.3601 | 0.0899 |
| 0.3087        | 42.68 | 1750 | 0.5313          | 0.3730 | 0.0887 |
| 0.1772        | 48.78 | 2000 | 0.5345          | 0.3473 | 0.0818 |
| 0.1772        | 54.88 | 2250 | 0.5637          | 0.3408 | 0.0824 |
| 0.1331        | 60.98 | 2500 | 0.5938          | 0.3023 | 0.0787 |
| 0.1331        | 67.07 | 2750 | 0.5622          | 0.3376 | 0.0824 |
| 0.1147        | 73.17 | 3000 | 0.5609          | 0.3923 | 0.0943 |
| 0.1147        | 79.27 | 3250 | 0.5213          | 0.3344 | 0.0812 |

Framework versions

  • Transformers 4.21.0.dev0
  • Pytorch 1.9.0+cu111
  • Datasets 2.2.2
  • Tokenizers 0.12.1