Language Classification

A model trained for language classification. Thanks to @sanchit-gandhi for the code that was used to train the model.

This model was trained for 15 epochs.
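Below is a minimal usage sketch. It assumes this checkpoint is a spoken-language identification model hosted as ml-for-speech/language-classification and that it exposes the standard transformers audio-classification pipeline interface; the audio file path is a hypothetical placeholder.

```python
# Minimal usage sketch, assuming the checkpoint follows the standard
# transformers audio-classification interface. The audio path below is a
# hypothetical placeholder.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="ml-for-speech/language-classification",
)

# Expects a path to an audio file (or a raw waveform array) and returns the
# top predicted language labels with their scores.
predictions = classifier("example.wav")
print(predictions)
```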

Evaluation

It achieves the following results on the evaluation set (a short accuracy-computation sketch follows the list):

  • Loss: 1.1229
  • Accuracy: 0.7401
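Accuracy here is the fraction of evaluation examples whose predicted language label matches the reference label. The sketch below shows one way such a score can be computed with the evaluate library; the label ids are hypothetical placeholders, not values from the actual evaluation set.

```python
# Accuracy-computation sketch using the evaluate library.
# The label ids below are hypothetical placeholders.
import evaluate

accuracy = evaluate.load("accuracy")
result = accuracy.compute(
    predictions=[0, 2, 1, 1],  # hypothetical predicted label ids
    references=[0, 2, 1, 0],   # hypothetical ground-truth label ids
)
print(result)  # {'accuracy': 0.75}
```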

Hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 64
  • eval_batch_size: 16
  • seed: 0
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 15.0
  • mixed_precision_training: Native AMP
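
The sketch below shows how these hyperparameters map onto transformers TrainingArguments. It is an illustration only: the output_dir is a hypothetical placeholder, and the actual training script may have configured things differently.

```python
# Sketch of how the listed hyperparameters map onto TrainingArguments.
# output_dir is a hypothetical placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="language-classification",
    learning_rate=1e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=16,
    seed=0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=15.0,
    fp16=True,  # corresponds to native AMP mixed-precision training
)
```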

Training Results

Training Loss | Epoch | Step | Validation Loss | Accuracy
------------- | ----- | ---- | --------------- | --------
3.1638 | 1.0 | 347 | 3.0152 | 0.4200
2.0788 | 2.0 | 694 | 1.9700 | 0.5504
1.4236 | 3.0 | 1041 | 1.5048 | 0.6374
1.0305 | 4.0 | 1388 | 1.2979 | 0.6685
0.7651 | 5.0 | 1735 | 1.1692 | 0.7023
0.5782 | 6.0 | 2082 | 1.0896 | 0.7227
0.4483 | 7.0 | 2429 | 1.0605 | 0.7198
0.3253 | 8.0 | 2776 | 1.0255 | 0.7376
0.2589 | 9.0 | 3123 | 1.0478 | 0.7354
0.1825 | 10.0 | 3470 | 1.0677 | 0.7318
0.1489 | 11.0 | 3817 | 1.0946 | 0.7373
0.1274 | 12.0 | 4164 | 1.1180 | 0.7376
0.1074 | 13.0 | 4511 | 1.1229 | 0.7401
0.0979 | 14.0 | 4858 | 1.1523 | 0.7383
0.0914 | 15.0 | 5205 | 1.1498 | 0.7401

Disclaimer

THE MODEL IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS MODEL INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS MODEL.
