layoutlm-sroie_only

This model is a fine-tuned version of microsoft/layoutlm-base-uncased, presumably on the SROIE receipt dataset (the card itself does not name the training data). It achieves the following results on the evaluation set:

  • Loss: 0.0629
  • Date: precision 0.9091, recall 1.0, F1 0.9524 (support: 50)
  • Address: precision 0.86, recall 0.86, F1 0.86 (support: 50)
  • Company: precision 0.7778, recall 0.84, F1 0.8077 (support: 50)
  • Total: precision 0.4167, recall 0.3, F1 0.3488 (support: 50)
  • Overall Precision: 0.7692
  • Overall Recall: 0.75
  • Overall F1: 0.7595
  • Overall Accuracy: 0.9820
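
For reference, below is a minimal inference sketch for token classification with this checkpoint. The model id follows the repository name, and the words and bounding boxes are illustrative placeholders, not SROIE data:

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_name = "pabloma09/layoutlm-sroie_only"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

# OCR words and their boxes, normalised to a 0-1000 coordinate grid
# (toy values for illustration).
words = ["ACME", "SDN", "BHD", "TOTAL", "12.80"]
boxes = [[60, 40, 200, 60], [210, 40, 280, 60], [290, 40, 360, 60],
         [60, 700, 140, 720], [150, 700, 220, 720]]

encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")

# LayoutLM needs one box per token: repeat each word's box for its
# sub-word pieces and use a dummy box for special tokens.
word_ids = encoding.word_ids()
encoding["bbox"] = torch.tensor(
    [[boxes[i] if i is not None else [0, 0, 0, 0] for i in word_ids]]
)

with torch.no_grad():
    logits = model(**encoding).logits

labels = [model.config.id2label[p] for p in logits.argmax(-1).squeeze().tolist()]
for wid, label in zip(word_ids, labels):
    if wid is not None:
        print(words[wid], label)
```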

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 3e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 4
  • mixed_precision_training: Native AMP
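
The exact training script is not published; the following sketch shows how the values above would map onto the transformers Trainer API (output_dir is an assumption):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="layoutlm-sroie_only",  # assumed; not stated in the card
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",               # AdamW with betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    num_train_epochs=4,
    fp16=True,                         # native AMP mixed precision
)
```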

Training results

The entity columns report precision / recall / F1; each entity has 50 gold instances (support) in the evaluation set.

| Training Loss | Epoch | Step | Validation Loss | Date (P / R / F1) | Address (P / R / F1) | Company (P / R / F1) | Total (P / R / F1) | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.5019 | 1.0 | 36 | 0.1120 | 0.6719 / 0.86 / 0.7544 | 0.7455 / 0.82 / 0.7810 | 0.5345 / 0.62 / 0.5741 | 0.0 / 0.0 / 0.0 | 0.6497 | 0.575 | 0.6101 | 0.9739 |
| 0.0635 | 2.0 | 72 | 0.0728 | 0.8448 / 0.98 / 0.9074 | 0.86 / 0.86 / 0.86 | 0.7925 / 0.84 / 0.8155 | 0.0 / 0.0 / 0.0 | 0.8272 | 0.67 | 0.7403 | 0.9818 |
| 0.0429 | 3.0 | 108 | 0.0650 | 0.9091 / 1.0 / 0.9524 | 0.86 / 0.86 / 0.86 | 0.7925 / 0.84 / 0.8155 | 0.4118 / 0.28 / 0.3333 | 0.7760 | 0.745 | 0.7602 | 0.9818 |
| 0.0341 | 4.0 | 144 | 0.0629 | 0.9091 / 1.0 / 0.9524 | 0.86 / 0.86 / 0.86 | 0.7778 / 0.84 / 0.8077 | 0.4167 / 0.3 / 0.3488 | 0.7692 | 0.75 | 0.7595 | 0.9820 |
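
The per-entity dictionaries above match the output format of the seqeval metric used by the standard Transformers token-classification examples. A minimal sketch with toy label sequences (the B-/I- tag names are assumed from the SROIE schema):

```python
from seqeval.metrics import classification_report

# Toy gold and predicted tag sequences for two receipt fields.
y_true = [["B-DATE", "I-DATE", "O", "B-TOTAL", "I-TOTAL"]]
y_pred = [["B-DATE", "I-DATE", "O", "O", "O"]]

print(classification_report(y_true, y_pred, digits=4))
```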

Framework versions

  • Transformers 4.50.0
  • Pytorch 2.1.0+cu118
  • Datasets 3.4.1
  • Tokenizers 0.21.1