Update model card to clarify fine-tuning objective: mitigating hallucination on out-of-distribution data
--- a/README.md
+++ b/README.md
@@ -9,7 +9,7 @@ tags:
 - bdd100k
 - autonomous-driving
 - BDD 100K
--
+- from-scratch
 pipeline_tag: object-detection
 datasets:
 - bdd100k
@@ -31,15 +31,15 @@ model-index:
       value: "TBD"
 ---
 
-# YOLOv10 - Berkeley DeepDrive (BDD) 100K
+# YOLOv10 - Berkeley DeepDrive (BDD) 100K Vanilla
 
-YOLOv10 model fine-tuned on Berkeley DeepDrive (BDD) 100K dataset
+YOLOv10 model fine-tuned on the Berkeley DeepDrive (BDD) 100K dataset to mitigate hallucination on out-of-distribution data in autonomous driving scenarios.
 
 ## Model Details
 
 - **Model Type**: YOLOv10 Object Detection
 - **Dataset**: Berkeley DeepDrive (BDD) 100K
-- **Training Method**: fine-tuned
+- **Training Method**: fine-tuned to mitigate hallucination on out-of-distribution data
 - **Framework**: PyTorch/Ultralytics
 - **Task**: Object Detection
 
@@ -79,7 +79,9 @@ for result in results:
 
 ## Model Performance
 
-This model was fine-tuned on the Berkeley DeepDrive (BDD) 100K dataset using YOLOv10 architecture.
+This model was fine-tuned on the Berkeley DeepDrive (BDD) 100K dataset, using the YOLOv10 architecture, to mitigate hallucination on out-of-distribution data.
+
+**Fine-tuning Objective**: This model was specifically fine-tuned to mitigate hallucination on out-of-distribution (OOD) data, improving robustness when encountering images that differ from the training distribution.
 
 ## Intended Use
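The context line of the last hunk (`for result in results:`) shows the card already contains a usage snippet, though the diff does not display it. For reference, a minimal inference sketch consistent with the card's stated PyTorch/Ultralytics framework is below; the weights filename and test image path are illustrative assumptions, not values taken from the card.

```python
# Minimal usage sketch for a fine-tuned YOLOv10 checkpoint via Ultralytics.
# The weights filename and image path below are hypothetical placeholders.
from ultralytics import YOLO

# Load the fine-tuned detector (filename assumed for illustration).
model = YOLO("yolov10_bdd100k.pt")

# Run detection on a driving-scene image; Ultralytics returns a list of Results.
results = model("street_scene.jpg")

for result in results:
    # Each detected box carries xyxy coordinates, a confidence score, and a class id.
    for box in result.boxes:
        cls_name = result.names[int(box.cls)]
        print(f"{cls_name}: conf={float(box.conf):.2f}, xyxy={box.xyxy.tolist()}")
```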