---
library_name: transformers
language:
- fr
license: mit
base_model: pyannote/segmentation-3.0
tags:
- speaker-diarization
- speaker-segmentation
- generated_from_trainer
datasets:
- CAENNAIS
model-index:
- name: model.no2_expe.dia.1.A_data_ESLO_06.05.25
  results: []
---
# model.no2_expe.dia.1.A_data_ESLO_06.05.25
This model is a fine-tuned version of [pyannote/segmentation-3.0](https://huggingface.co/pyannote/segmentation-3.0) on the CAENNAIS dataset. It achieves the following results on the evaluation set:
- Loss: 0.7552
- Model Preparation Time: 0.0039
- DER: 0.4612
- False Alarm: 0.1526
- Missed Detection: 0.2132
- Confusion: 0.0953
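The diarization error rate (DER) reported above is, up to rounding, the sum of its three component metrics. This can be checked directly:

```python
# DER decomposes into false alarm + missed detection + speaker confusion.
false_alarm = 0.1526
missed_detection = 0.2132
confusion = 0.0953

der = false_alarm + missed_detection + confusion
print(round(der, 4))  # 0.4611, vs. the reported 0.4612 (rounding)
```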
## Model description
More information needed
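No usage instructions were provided with this card. As a hedged sketch, a fine-tuned pyannote segmentation checkpoint can typically be loaded with `pyannote.audio`; the repository id below is assumed to match this card, and the token placeholder must be replaced with a valid Hugging Face access token:

```python
# Sketch only: assumes pyannote.audio is installed and that the checkpoint
# is hosted on the Hugging Face Hub under the repo id below (assumed).
from pyannote.audio import Model

model = Model.from_pretrained(
    "model.no2_expe.dia.1.A_data_ESLO_06.05.25",  # assumed repo id
    use_auth_token="hf_...",  # placeholder: your Hugging Face access token
)
```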
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 20
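The cosine schedule anneals the learning rate from its initial value toward zero over training. As a rough sketch of the formula (an assumption: the trainer's actual schedule is applied per optimization step and may include warmup):

```python
import math

def cosine_lr(step: int, total_steps: int, base_lr: float) -> float:
    """Cosine annealing from base_lr down to 0 over total_steps."""
    return 0.5 * base_lr * (1 + math.cos(math.pi * step / total_steps))

# With the hyperparameters above: base lr 0.001 over 5400 training steps.
print(cosine_lr(0, 5400, 1e-3))     # 0.001 at the start
print(cosine_lr(2700, 5400, 1e-3))  # ≈ 0.0005 halfway through
print(cosine_lr(5400, 5400, 1e-3))  # ≈ 0.0 at the end
```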
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | DER | False Alarm | Missed Detection | Confusion |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.8598 | 1.0 | 270 | 0.8043 | 0.0039 | 0.5158 | 0.1643 | 0.2233 | 0.1282 |
| 0.8108 | 2.0 | 540 | 0.7766 | 0.0039 | 0.4948 | 0.1529 | 0.2280 | 0.1139 |
| 0.795 | 3.0 | 810 | 0.7713 | 0.0039 | 0.4886 | 0.1697 | 0.1979 | 0.1210 |
| 0.7601 | 4.0 | 1080 | 0.7588 | 0.0039 | 0.4746 | 0.1590 | 0.2119 | 0.1037 |
| 0.7705 | 5.0 | 1350 | 0.7490 | 0.0039 | 0.4675 | 0.1374 | 0.2429 | 0.0871 |
| 0.7667 | 6.0 | 1620 | 0.7848 | 0.0039 | 0.4784 | 0.1537 | 0.2247 | 0.1000 |
| 0.7198 | 7.0 | 1890 | 0.7692 | 0.0039 | 0.4855 | 0.1353 | 0.2505 | 0.0996 |
| 0.7266 | 8.0 | 2160 | 0.7474 | 0.0039 | 0.4671 | 0.1448 | 0.2267 | 0.0955 |
| 0.7011 | 9.0 | 2430 | 0.7509 | 0.0039 | 0.4622 | 0.1675 | 0.1915 | 0.1032 |
| 0.717 | 10.0 | 2700 | 0.7523 | 0.0039 | 0.4656 | 0.1578 | 0.2123 | 0.0955 |
| 0.7083 | 11.0 | 2970 | 0.7439 | 0.0039 | 0.4624 | 0.1443 | 0.2320 | 0.0861 |
| 0.7153 | 12.0 | 3240 | 0.7462 | 0.0039 | 0.4614 | 0.1548 | 0.2091 | 0.0975 |
| 0.6498 | 13.0 | 3510 | 0.7512 | 0.0039 | 0.4663 | 0.1564 | 0.2122 | 0.0978 |
| 0.6935 | 14.0 | 3780 | 0.7501 | 0.0039 | 0.4621 | 0.1533 | 0.2139 | 0.0950 |
| 0.6746 | 15.0 | 4050 | 0.7534 | 0.0039 | 0.4632 | 0.1544 | 0.2139 | 0.0950 |
| 0.6751 | 16.0 | 4320 | 0.7515 | 0.0039 | 0.4627 | 0.1552 | 0.2122 | 0.0952 |
| 0.6848 | 17.0 | 4590 | 0.7542 | 0.0039 | 0.4632 | 0.1516 | 0.2187 | 0.0928 |
| 0.6759 | 18.0 | 4860 | 0.7577 | 0.0039 | 0.4605 | 0.1534 | 0.2121 | 0.0951 |
| 0.6856 | 19.0 | 5130 | 0.7540 | 0.0039 | 0.4609 | 0.1525 | 0.2135 | 0.0949 |
| 0.7036 | 20.0 | 5400 | 0.7552 | 0.0039 | 0.4612 | 0.1526 | 0.2132 | 0.0953 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0