layoutlm-sroie_synthetic

This model is a fine-tuned version of microsoft/layoutlm-base-uncased, trained on a synthetic SROIE-style receipt dataset (not further specified in the training configuration). It achieves the following results on the evaluation set:

  • Loss: 0.1204
  • Date: precision 0.7636, recall 0.8571, F1 0.8077 (support: 49)
  • Address: precision 0.8654, recall 0.9000, F1 0.8824 (support: 50)
  • Company: precision 0.9231, recall 0.9600, F1 0.9412 (support: 50)
  • Total: precision 0.9216, recall 0.9400, F1 0.9307 (support: 50)
  • Overall Precision: 0.8667
  • Overall Recall: 0.9146
  • Overall F1: 0.8900
  • Overall Accuracy: 0.9778

Model description

More information needed

Intended uses & limitations

More information needed
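
Pending fuller documentation, the sketch below shows one way this checkpoint could be loaded for token classification. The words and bounding boxes are made-up illustrative values; LayoutLM expects word boxes normalized to a 0-1000 page coordinate scale, and the label set is assumed to follow the SROIE fields (company, date, address, total).

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "pabloma09/layoutlm-sroie_synthetic"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# Illustrative OCR output: words plus their boxes on a 0-1000 scale.
words = ["TOTAL", "RM", "12.80"]
boxes = [[100, 900, 180, 930], [600, 900, 640, 930], [650, 900, 720, 930]]

encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")
# Expand word-level boxes to token level; special tokens get a dummy box.
token_boxes = [
    boxes[idx] if idx is not None else [0, 0, 0, 0]
    for idx in encoding.word_ids(0)
]
encoding["bbox"] = torch.tensor([token_boxes])

with torch.no_grad():
    logits = model(**encoding).logits
predicted = [model.config.id2label[i] for i in logits.argmax(-1)[0].tolist()]
print(list(zip(encoding.word_ids(0), predicted)))
```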

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an illustrative TrainingArguments sketch follows the list):

  • learning_rate: 3e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: adamw_torch (AdamW) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 4
  • mixed_precision_training: Native AMP
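
A minimal sketch of TrainingArguments matching the list above. This is not the author's actual training script: `model`, `train_dataset`, and `eval_dataset` are assumed to be defined elsewhere (a LayoutLM token-classification head plus a preprocessed SROIE-style dataset).

```python
from transformers import TrainingArguments, Trainer

training_args = TrainingArguments(
    output_dir="layoutlm-sroie_synthetic",
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",          # AdamW; betas=(0.9, 0.999), eps=1e-8 are the defaults
    lr_scheduler_type="linear",
    num_train_epochs=4,
    fp16=True,                    # Native AMP mixed precision
)

trainer = Trainer(
    model=model,                  # assumed: fine-tuned LayoutLM head, defined elsewhere
    args=training_args,
    train_dataset=train_dataset,  # assumed: preprocessed SROIE-style splits
    eval_dataset=eval_dataset,
)
trainer.train()
```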

Training results

| Training Loss | Epoch | Step | Validation Loss | Date P / R / F1 (n=49) | Address P / R / F1 (n=50) | Company P / R / F1 (n=50) | Total P / R / F1 (n=50) | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.4783 | 1.0 | 43 | 0.1302 | 0.7200 / 0.7347 / 0.7273 | 0.8113 / 0.8600 / 0.8350 | 0.9423 / 0.9800 / 0.9608 | 0.7143 / 0.8000 / 0.7547 | 0.7962 | 0.8442 | 0.8195 | 0.9682 |
| 0.0285 | 2.0 | 86 | 0.1126 | 0.7500 / 0.8571 / 0.8000 | 0.7963 / 0.8600 / 0.8269 | 0.9231 / 0.9600 / 0.9412 | 0.9216 / 0.9400 / 0.9307 | 0.8451 | 0.9045 | 0.8738 | 0.9764 |
| 0.0119 | 3.0 | 129 | 0.1177 | 0.7963 / 0.8776 / 0.8350 | 0.8846 / 0.9200 / 0.9020 | 0.9423 / 0.9800 / 0.9608 | 0.9216 / 0.9400 / 0.9307 | 0.8852 | 0.9296 | 0.9069 | 0.9797 |
| 0.0078 | 4.0 | 172 | 0.1204 | 0.7636 / 0.8571 / 0.8077 | 0.8654 / 0.9000 / 0.8824 | 0.9231 / 0.9600 / 0.9412 | 0.9216 / 0.9400 / 0.9307 | 0.8667 | 0.9146 | 0.8900 | 0.9778 |
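
Overall F1 peaked at epoch 3 (0.9069) and dipped slightly at the final epoch; the headline metrics above correspond to epoch 4. The per-entity numbers look like seqeval-style entity-level scores; the sketch below shows how such a report can be produced, with made-up IOB-tagged sequences standing in for real predictions and gold labels.

```python
from seqeval.metrics import classification_report

# Illustrative IOB sequences; a real evaluation would use the model's
# word-level predictions and the dataset's gold labels.
y_true = [["B-COMPANY", "I-COMPANY", "O", "B-TOTAL"]]
y_pred = [["B-COMPANY", "I-COMPANY", "O", "B-DATE"]]

print(classification_report(y_true, y_pred, digits=4))
```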

Framework versions

  • Transformers 4.50.0
  • Pytorch 2.1.0+cu118
  • Datasets 3.4.1
  • Tokenizers 0.21.1