full-ja4-2.25sec-big-rf

This model is a fine-tuned version of pyannote/segmentation-3.0 on the objects76/synthetic-ja4-speaker-overlap-6400 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4142
  • DER: 0.1244
  • False Alarm: 0.0536
  • Missed Detection: 0.0531
  • Confusion: 0.0177

DER (diarization error rate) is the sum of the false alarm, missed detection, and confusion components: 0.0536 + 0.0531 + 0.0177 = 0.1244.
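
Since the base model is a pyannote segmentation model, the checkpoint should be loadable with pyannote.audio. A minimal sketch, assuming this model's Hub repo id (objects76/full-ja4-2.25sec-big-rf-250513_1709) and a 2.25 s sliding window inferred from the model name:

```python
from pyannote.audio import Inference, Model

# Assumption: this checkpoint loads like its base model, pyannote/segmentation-3.0.
# Pass use_auth_token=... if the repository is gated.
model = Model.from_pretrained("objects76/full-ja4-2.25sec-big-rf-250513_1709")

# Sliding-window inference; the 2.25 s duration is inferred from the model name.
inference = Inference(model, duration=2.25, step=0.25)
activations = inference("audio.wav")  # SlidingWindowFeature of speaker activations

print(activations.data.shape)
```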

Model description

A lightweight speaker segmentation checkpoint (about 1.47M parameters, stored as F32 safetensors) fine-tuned from pyannote/segmentation-3.0 on synthetic speaker-overlap data. No further description is provided.

Intended uses & limitations

More information needed

Training and evaluation data

The model was trained and evaluated on objects76/synthetic-ja4-speaker-overlap-6400, which its name suggests is a synthetic speaker-overlap dataset of 6,400 examples. No further details are provided.

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.001
  • train_batch_size: 2048
  • eval_batch_size: 2048
  • seed: 42
  • optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • num_epochs: 200
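
These values map onto transformers.TrainingArguments roughly as follows. This is a sketch, not the actual training script: output_dir is illustrative, and the card does not say whether the batch sizes are per device or total.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="full-ja4-2.25sec-big-rf",  # illustrative
    learning_rate=1e-3,
    per_device_train_batch_size=2048,  # assuming "train_batch_size" is per device
    per_device_eval_batch_size=2048,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    num_train_epochs=200,
)
```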

Training results

DER, false alarm, missed detection, and confusion are reported as fractions of reference speech. Training was configured for 200 epochs, but the log ends at epoch 72 (step 504), whose metrics match the evaluation summary above, so the run appears to have stopped early.

| Training Loss | Epoch | Step | Validation Loss | DER    | False Alarm | Missed Detection | Confusion |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-----------:|:----------------:|:---------:|
| No log        | 1.0   | 7    | 0.9837          | 0.2999 | 0.1200      | 0.1206           | 0.0594    |
| No log        | 2.0   | 14   | 0.8711          | 0.2710 | 0.0724      | 0.1335           | 0.0651    |
| No log        | 3.0   | 21   | 0.8192          | 0.2659 | 0.0140      | 0.1744           | 0.0775    |
| 0.9138        | 4.0   | 28   | 0.7813          | 0.2422 | 0.0283      | 0.1425           | 0.0714    |
| 0.9138        | 5.0   | 35   | 0.7536          | 0.2231 | 0.0685      | 0.0977           | 0.0569    |
| 0.9138        | 6.0   | 42   | 0.7049          | 0.2136 | 0.0681      | 0.0891           | 0.0564    |
| 0.9138        | 7.0   | 49   | 0.6687          | 0.2050 | 0.0815      | 0.0726           | 0.0509    |
| 0.7407        | 8.0   | 56   | 0.6428          | 0.2004 | 0.0872      | 0.0661           | 0.0470    |
| 0.7407        | 9.0   | 63   | 0.6224          | 0.1950 | 0.0827      | 0.0664           | 0.0459    |
| 0.7407        | 10.0  | 70   | 0.6045          | 0.1915 | 0.0874      | 0.0611           | 0.0430    |
| 0.6308        | 11.0  | 77   | 0.5877          | 0.1877 | 0.0839      | 0.0620           | 0.0418    |
| 0.6308        | 12.0  | 84   | 0.5714          | 0.1828 | 0.0812      | 0.0625           | 0.0391    |
| 0.6308        | 13.0  | 91   | 0.5571          | 0.1771 | 0.0820      | 0.0595           | 0.0356    |
| 0.6308        | 14.0  | 98   | 0.5410          | 0.1698 | 0.0755      | 0.0608           | 0.0335    |
| 0.5637        | 15.0  | 105  | 0.5294          | 0.1642 | 0.0753      | 0.0579           | 0.0310    |
| 0.5637        | 16.0  | 112  | 0.5173          | 0.1588 | 0.0691      | 0.0604           | 0.0293    |
| 0.5637        | 17.0  | 119  | 0.5070          | 0.1551 | 0.0687      | 0.0586           | 0.0278    |
| 0.5103        | 18.0  | 126  | 0.4972          | 0.1517 | 0.0634      | 0.0604           | 0.0279    |
| 0.5103        | 19.0  | 133  | 0.4876          | 0.1493 | 0.0625      | 0.0596           | 0.0272    |
| 0.5103        | 20.0  | 140  | 0.4819          | 0.1490 | 0.0643      | 0.0583           | 0.0264    |
| 0.5103        | 21.0  | 147  | 0.4732          | 0.1471 | 0.0605      | 0.0606           | 0.0260    |
| 0.4692        | 22.0  | 154  | 0.4689          | 0.1453 | 0.0607      | 0.0595           | 0.0251    |
| 0.4692        | 23.0  | 161  | 0.4639          | 0.1435 | 0.0601      | 0.0584           | 0.0250    |
| 0.4692        | 24.0  | 168  | 0.4591          | 0.1417 | 0.0622      | 0.0556           | 0.0238    |
| 0.4454        | 25.0  | 175  | 0.4529          | 0.1406 | 0.0571      | 0.0594           | 0.0241    |
| 0.4454        | 26.0  | 182  | 0.4527          | 0.1404 | 0.0612      | 0.0558           | 0.0234    |
| 0.4454        | 27.0  | 189  | 0.4459          | 0.1392 | 0.0560      | 0.0602           | 0.0229    |
| 0.4454        | 28.0  | 196  | 0.4473          | 0.1388 | 0.0634      | 0.0536           | 0.0217    |
| 0.423         | 29.0  | 203  | 0.4434          | 0.1375 | 0.0580      | 0.0572           | 0.0223    |
| 0.423         | 30.0  | 210  | 0.4398          | 0.1368 | 0.0564      | 0.0579           | 0.0225    |
| 0.423         | 31.0  | 217  | 0.4379          | 0.1361 | 0.0599      | 0.0547           | 0.0215    |
| 0.423         | 32.0  | 224  | 0.4340          | 0.1350 | 0.0553      | 0.0578           | 0.0220    |
| 0.3906        | 33.0  | 231  | 0.4334          | 0.1352 | 0.0561      | 0.0575           | 0.0216    |
| 0.3906        | 34.0  | 238  | 0.4348          | 0.1353 | 0.0602      | 0.0541           | 0.0211    |
| 0.3906        | 35.0  | 245  | 0.4274          | 0.1326 | 0.0551      | 0.0565           | 0.0211    |
| 0.3918        | 36.0  | 252  | 0.4280          | 0.1336 | 0.0577      | 0.0545           | 0.0214    |
| 0.3918        | 37.0  | 259  | 0.4297          | 0.1328 | 0.0545      | 0.0562           | 0.0220    |
| 0.3918        | 38.0  | 266  | 0.4315          | 0.1333 | 0.0572      | 0.0541           | 0.0220    |
| 0.3918        | 39.0  | 273  | 0.4287          | 0.1324 | 0.0533      | 0.0571           | 0.0220    |
| 0.3811        | 40.0  | 280  | 0.4222          | 0.1310 | 0.0577      | 0.0531           | 0.0202    |
| 0.3811        | 41.0  | 287  | 0.4178          | 0.1298 | 0.0540      | 0.0556           | 0.0202    |
| 0.3811        | 42.0  | 294  | 0.4218          | 0.1296 | 0.0545      | 0.0549           | 0.0201    |
| 0.3707        | 43.0  | 301  | 0.4176          | 0.1280 | 0.0535      | 0.0556           | 0.0189    |
| 0.3707        | 44.0  | 308  | 0.4156          | 0.1287 | 0.0556      | 0.0542           | 0.0189    |
| 0.3707        | 45.0  | 315  | 0.4142          | 0.1267 | 0.0524      | 0.0551           | 0.0192    |
| 0.3707        | 46.0  | 322  | 0.4141          | 0.1269 | 0.0529      | 0.0552           | 0.0188    |
| 0.3636        | 47.0  | 329  | 0.4188          | 0.1283 | 0.0571      | 0.0522           | 0.0190    |
| 0.3636        | 48.0  | 336  | 0.4129          | 0.1264 | 0.0503      | 0.0569           | 0.0191    |
| 0.3636        | 49.0  | 343  | 0.4167          | 0.1276 | 0.0574      | 0.0520           | 0.0182    |
| 0.354         | 50.0  | 350  | 0.4141          | 0.1264 | 0.0511      | 0.0562           | 0.0191    |
| 0.354         | 51.0  | 357  | 0.4132          | 0.1264 | 0.0555      | 0.0526           | 0.0184    |
| 0.354         | 52.0  | 364  | 0.4137          | 0.1265 | 0.0510      | 0.0562           | 0.0193    |
| 0.354         | 53.0  | 371  | 0.4153          | 0.1267 | 0.0560      | 0.0520           | 0.0187    |
| 0.3411        | 54.0  | 378  | 0.4133          | 0.1255 | 0.0540      | 0.0534           | 0.0182    |
| 0.3411        | 55.0  | 385  | 0.4143          | 0.1251 | 0.0515      | 0.0549           | 0.0187    |
| 0.3411        | 56.0  | 392  | 0.4136          | 0.1254 | 0.0544      | 0.0524           | 0.0186    |
| 0.3411        | 57.0  | 399  | 0.4118          | 0.1253 | 0.0526      | 0.0538           | 0.0189    |
| 0.3413        | 58.0  | 406  | 0.4134          | 0.1259 | 0.0532      | 0.0541           | 0.0187    |
| 0.3413        | 59.0  | 413  | 0.4137          | 0.1255 | 0.0529      | 0.0542           | 0.0184    |
| 0.3413        | 60.0  | 420  | 0.4119          | 0.1243 | 0.0509      | 0.0547           | 0.0186    |
| 0.3381        | 61.0  | 427  | 0.4117          | 0.1249 | 0.0532      | 0.0534           | 0.0183    |
| 0.3381        | 62.0  | 434  | 0.4092          | 0.1258 | 0.0504      | 0.0570           | 0.0184    |
| 0.3381        | 63.0  | 441  | 0.4122          | 0.1254 | 0.0548      | 0.0529           | 0.0177    |
| 0.3381        | 64.0  | 448  | 0.4121          | 0.1253 | 0.0502      | 0.0560           | 0.0190    |
| 0.3351        | 65.0  | 455  | 0.4131          | 0.1255 | 0.0572      | 0.0511           | 0.0172    |
| 0.3351        | 66.0  | 462  | 0.4094          | 0.1241 | 0.0516      | 0.0551           | 0.0173    |
| 0.3351        | 67.0  | 469  | 0.4123          | 0.1246 | 0.0549      | 0.0525           | 0.0172    |
| 0.3296        | 68.0  | 476  | 0.4107          | 0.1243 | 0.0533      | 0.0536           | 0.0173    |
| 0.3296        | 69.0  | 483  | 0.4096          | 0.1233 | 0.0504      | 0.0552           | 0.0177    |
| 0.3296        | 70.0  | 490  | 0.4126          | 0.1250 | 0.0518      | 0.0552           | 0.0181    |
| 0.3296        | 71.0  | 497  | 0.4127          | 0.1248 | 0.0509      | 0.0558           | 0.0181    |
| 0.3242        | 72.0  | 504  | 0.4142          | 0.1244 | 0.0536      | 0.0531           | 0.0177    |
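
To reproduce the DER breakdown on other data, pyannote.metrics reports the same components. A minimal sketch with toy annotations; a real evaluation would load reference and hypothesis annotations from RTTM files:

```python
from pyannote.core import Annotation, Segment
from pyannote.metrics.diarization import DiarizationErrorRate

# Toy reference and hypothesis annotations.
reference = Annotation()
reference[Segment(0.0, 2.0)] = "spk1"
reference[Segment(1.5, 4.0)] = "spk2"

hypothesis = Annotation()
hypothesis[Segment(0.0, 2.2)] = "A"
hypothesis[Segment(1.6, 3.8)] = "B"

metric = DiarizationErrorRate()
# detailed=True returns the components; DER = (false alarm + missed detection
# + confusion) / total reference speech, matching the columns above.
detail = metric(reference, hypothesis, detailed=True)
print(detail["diarization error rate"], detail["false alarm"],
      detail["missed detection"], detail["confusion"])
```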

Framework versions

  • Transformers 4.51.3
  • PyTorch 2.6.0+cu124
  • Datasets 3.5.0
  • Tokenizers 0.21.1
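
A quick way to check that a local environment matches these pins, assuming all four packages are installed:

```python
import datasets, tokenizers, torch, transformers

# Expected versions from the card; mismatches are usually fine for inference,
# but exact pins help when reproducing training.
for mod, expected in [(transformers, "4.51.3"), (torch, "2.6.0"),
                      (datasets, "3.5.0"), (tokenizers, "0.21.1")]:
    print(mod.__name__, mod.__version__, "expected:", expected)
```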