---
license: apache-2.0
base_model: allenai/longformer-base-4096
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: results-improved
  results: []
library_name: transformers
pipeline_tag: token-classification
language:
- en
---

**Training and Evaluation Metrics**

| Metric                     | Value                  |
|----------------------------|------------------------|
| **Evaluation Precision**   | 0.8206                 |
| **Evaluation Recall**      | 0.8099                 |
| **Evaluation F1 Score**    | 0.8132                 |
| **Evaluation Accuracy**    | 0.8099                 |
| **Evaluation Loss**        | 0.5319                 |
| **Evaluation Runtime**     | 46.067 seconds         |
| **Evaluation Samples/Sec** | 23.205                 |
| **Evaluation Steps/Sec**   | 0.304                  |
| **Training Loss**          | 0.3471                 |
| **Training Loss (Epoch)**  | 0.7509                 |
| **Training Epochs**        | 5                      |
| **Global Training Steps**  | 270                    |
| **Learning Rate**          | 1.88e-8                |
| **Gradient Norm**          | 5.0118                 |
| **Training Runtime**       | 3761.66 seconds        |
| **Training Samples/Sec**   | 5.681                  |
| **Training Steps/Sec**     | 0.072                  |
| **Total FLOPs**            | 55,842,753,586,544,640 |
| **Runtime**                | 3821.17 seconds        |
| **Wandb Runtime**          | 3822 seconds           |

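
The card does not state how the evaluation precision, recall, F1, and accuracy above were computed. For `generated_from_trainer` token-classification runs they typically come from a `compute_metrics` hook built on `seqeval`; a minimal sketch of that common setup is below, using a hypothetical label set since the actual labels are not listed here.

```python
import numpy as np
import evaluate

seqeval = evaluate.load("seqeval")

# Hypothetical label set -- the card does not list the actual labels.
label_list = ["O", "B-ENT", "I-ENT"]

def compute_metrics(eval_pred):
    """Convert logits to label strings and score them with seqeval."""
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)

    # Drop padding / special tokens, which the Trainer marks with -100.
    true_labels = [
        [label_list[l] for l in label_row if l != -100]
        for label_row in labels
    ]
    true_predictions = [
        [label_list[p] for p, l in zip(pred_row, label_row) if l != -100]
        for pred_row, label_row in zip(predictions, labels)
    ]

    results = seqeval.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```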
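
**Example Usage**

This model was fine-tuned from `allenai/longformer-base-4096` for token classification with the `transformers` library. A minimal sketch of loading it through the pipeline API is shown below; the repository id is a placeholder (`results-improved` follows the model-index name above), since the card does not give the final Hub path.

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

# Placeholder id -- substitute the actual Hub path of this fine-tuned checkpoint.
model_id = "results-improved"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# Longformer accepts inputs up to 4096 tokens; aggregation_strategy="simple"
# merges sub-word predictions into word-level spans.
tagger = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",
)

print(tagger("An example sentence to tag token by token."))
```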