luisarizmendi committed
Commit e92402e · 1 Parent(s): 84fdf13

model update

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. README.md +30 -92
  2. confusion_matrix_normalized.png +0 -0
  3. dev/object-detection-model-file/pytorch/Containerfile +21 -0
  4. run_model.py → dev/object-detection-model-file/pytorch/object-detection-pytorch.py +23 -19
  5. dev/object-detection-model-file/pytorch/requirements.txt +5 -0
  6. dev/prototyping.ipynb +460 -0
  7. example.png → images/example.png +0 -0
  8. spaces-example.png → images/spaces-example.png +0 -0
  9. results.png +0 -0
  10. train.ipynb +0 -0
  11. v1/model/onnx/1/model.onnx +3 -0
  12. v1/model/pytorch/best.pt +3 -0
  13. v1/test/F1_curve.png +0 -0
  14. v1/test/PR_curve.png +0 -0
  15. v1/test/P_curve.png +0 -0
  16. v1/test/R_curve.png +0 -0
  17. v1/test/confusion_matrix.png +0 -0
  18. v1/test/confusion_matrix_normalized.png +0 -0
  19. v1/test/val_batch0_labels.jpg +0 -0
  20. v1/test/val_batch0_pred.jpg +0 -0
  21. v1/test/val_batch1_labels.jpg +0 -0
  22. v1/test/val_batch1_pred.jpg +0 -0
  23. v1/test/val_batch2_labels.jpg +0 -0
  24. v1/test/val_batch2_pred.jpg +0 -0
  25. v1/train-val/F1_curve.png +0 -0
  26. v1/train-val/PR_curve.png +0 -0
  27. v1/train-val/P_curve.png +0 -0
  28. v1/train-val/R_curve.png +0 -0
  29. v1/train-val/args.yaml +106 -0
  30. v1/train-val/confusion_matrix.png +0 -0
  31. v1/train-val/confusion_matrix_normalized.png +0 -0
  32. v1/train-val/events.out.tfevents.1738747289.yolo-training-pipeline-7vd94-system-container-impl-3115668440.58.0 +3 -0
  33. v1/train-val/labels.jpg +0 -0
  34. v1/train-val/labels_correlogram.jpg +0 -0
  35. v1/train-val/results.csv +66 -0
  36. v1/train-val/results.png +0 -0
  37. v1/train-val/train_batch0.jpg +0 -0
  38. v1/train-val/train_batch1.jpg +0 -0
  39. v1/train-val/train_batch2.jpg +0 -0
  40. v1/train-val/train_batch27555.jpg +0 -0
  41. v1/train-val/train_batch27556.jpg +0 -0
  42. v1/train-val/train_batch27557.jpg +0 -0
  43. v1/train-val/val_batch0_labels.jpg +0 -0
  44. v1/train-val/val_batch0_pred.jpg +0 -0
  45. v1/train-val/val_batch1_labels.jpg +0 -0
  46. v1/train-val/val_batch1_pred.jpg +0 -0
  47. v1/train-val/val_batch2_labels.jpg +0 -0
  48. v1/train-val/val_batch2_pred.jpg +0 -0
  49. v2/model/pytorch/best.pt +3 -0
  50. v2/test/F1_curve.png +0 -0
README.md CHANGED
@@ -1,65 +1,29 @@
1
- ---
2
- task_categories:
3
- - object-detection
4
- tags:
5
- - yolo
6
- - yolo11
7
- - hardhat
8
- - hat
9
- base_model:
10
- - Ultralytics/YOLO11
11
- widget:
12
- - text: "Helmet detection"
13
- output:
14
- url: example.png
15
- pipeline_tag: object-detection
16
- model-index:
17
- - name: hardhat-or-hat
18
- results:
19
- - task:
20
- type: object-detection
21
- dataset:
22
- type: safety-equipment
23
- name: Safety Equipment
24
- args:
25
- epochs: 35
26
- batch: 2
27
- imgsz: 640
28
- patience: 5
29
- optimizer: SGD
30
- lr0: 0.001
31
- lrf: 0.01
32
- momentum: 0.9
33
- weight_decay: 0.0005
34
- warmup_epochs: 3
35
- warmup_bias_lr: 0.01
36
- warmup_momentum: 0.8
37
- metrics:
38
- - type: precision
39
- name: Precision
40
- value: 0.844
41
- - type: recall
42
- name: Recall
43
- value: 0.847
44
- - type: mAP50
45
- name: mAP50
46
- value: 0.893
47
- - type: mAP50-95
48
- name: mAP50-95
49
- value: 0.546
50
- ---
51
 
52
  # Model for detecting Hardhats and Hats
53
 
54
 
55
  <div align="center">
56
- <img width="640" alt="luisarizmendi/hardhat-or-hat" src="example.png">
57
  </div>
58
 
59
  ## Model binary
60
 
61
- You can [download the model from here](https://github.com/luisarizmendi/ai-apps/raw/refs/heads/main/models/luisarizmendi/object-detection-hardhat-or-hat/object-detection-hardhat-or-hat-m.pt)
63
 
64
  ## Labels
65
 
@@ -70,43 +34,34 @@ You can [download the model from here](https://github.com/luisarizmendi/ai-apps/
70
  ```
71
 
72
 
73
- ## Base Model
74
-
75
- Ultralytics/YOLO11m
76
-
77
  ## Model metrics
78
 
79
- ```
80
- YOLO11m summary (fused): 303 layers, 20,032,345 parameters, 0 gradients, 67.7 GFLOPs
81
- Class Images Instances Box(P R mAP50 mAP50-95)
82
- all 1992 15306 0.844 0.847 0.893 0.546
83
- hat 244 287 0.869 0.811 0.876 0.578
84
- helmet 1202 3942 0.916 0.892 0.942 0.61
85
- no_helmet 741 11077 0.746 0.838 0.861 0.45
86
- ```
87
 
88
 
89
  <div align="center">
90
- <img width="640" alt="luisarizmendi/hardhat-or-hat" src="confusion_matrix_normalized.png"> <img width="640" alt="luisarizmendi/hardhat-or-hat" src="results.png">
91
  </div>
92
 
93
 
94
- ## Model Dataset
95
 
96
- [https://universe.roboflow.com/luisarizmendi/hardhat-or-hat](https://universe.roboflow.com/luisarizmendi/hardhat-or-hat)
97
 
 
98
 
99
- ## Model training
100
 
101
- You can [review the Jupyter notebook here](https://github.com/luisarizmendi/ai-apps/blob/main/dev/hardhat-or-hat/train.ipynb)
102
 
103
  ### Hyperparameters
104
 
105
  ```
106
- epochs: 35
107
- batch: 2
 
108
  imgsz: 640
109
- patience: 5
110
  optimizer: 'SGD'
111
  lr0: 0.001
112
  lrf: 0.01
@@ -117,23 +72,6 @@ warmup_bias_lr: 0.01
117
  warmup_momentum: 0.8
118
  ```
119
 
120
- ### Augmentation
121
-
122
- ```
123
- hsv_h=0.015, # Image HSV-Hue augmentationc
124
- hsv_s=0.7, # Image HSV-Saturation augmentation
125
- hsv_v=0.4, # Image HSV-Value augmentation
126
- degrees=10, # Image rotation (+/- deg)
127
- translate=0.1, # Image translation (+/- fraction)
128
- scale=0.3, # Image scale (+/- gain)
129
- shear=0.0, # Image shear (+/- deg)
130
- perspective=0.0, # Image perspective
131
- flipud=0.1, # Image flip up-down
132
- fliplr=0.1, # Image flip left-right
133
- mosaic=1.0, # Image mosaic
134
- mixup=0.0, # Image mixup
135
- ```
136
-
137
 
138
  ## Model Usage
139
 
@@ -144,7 +82,7 @@ If you don't want to run it locally, you can use [this huggingface space](https:
144
  Remember to check that the Model URL is pointing to the model that you want to test.
145
 
146
  <div align="center">
147
- <img width="640" alt="luisarizmendi/hardhat-or-hat" src="https://huggingface.co/luisarizmendi/hardhat-or-hat/resolve/main/spaces-example.png">
148
  </div>
149
 
150
 
@@ -161,7 +99,7 @@ opencv-python
161
  torch
162
  ```
163
 
164
- Then [run the python code below ](https://github.com/luisarizmendi/ai-apps/raw/refs/heads/main/models/luisarizmendi/object-detector-hardhat-or-hat/run_model.py) and open `http://localhost:7860` in a browser to upload and scan the images.
165
 
166
 
167
  ```
@@ -172,7 +110,7 @@ import os
172
  import cv2
173
  import torch
174
 
175
- DEFAULT_MODEL_URL = "https://github.com/luisarizmendi/ai-apps/raw/refs/heads/main/models/luisarizmendi/object-detection-hardhat-or-hat/object-detection-hardhat-or-hat-m.pt"
176
 
177
  def detect_objects_in_files(model_input, files):
178
  """
1
 
2
  # Model for detecting Hardhats and Hats
3
 
4
 
5
  <div align="center">
6
+ <img width="640" alt="luisarizmendi/hardhat-or-hat" src="images/example.png">
7
  </div>
8
 
9
  ## Model binary
10
 
11
+ You can [download the model from here](https://huggingface.co/luisarizmendi/hardhat-or-hat/tree/main/v2/model/pytorch/best.pt)
12
+
13
+
14
+ ## Base Model
15
+
16
+ Ultralytics/YOLO11m
17
+
18
+
19
+ ## Huggingface page
20
 
21
+ https://huggingface.co/luisarizmendi/hardhat-or-hat
22
+
23
+
24
+ ## Model Dataset
25
+
26
+ [https://universe.roboflow.com/luisarizmendi/hardhat-or-hat](https://universe.roboflow.com/luisarizmendi/hardhat-or-hat)
27
 
28
  ## Labels
29
 
 
34
  ```
35
 
36
 
37
  ## Model metrics
38
 
39
+ <div align="center">
40
+ <img width="640" alt="luisarizmendi/hardhat-or-hat" src="v2/train-val/results.png">
41
+ </div>
42
+
43
 
44
 
45
  <div align="center">
46
+ <img width="640" alt="luisarizmendi/hardhat-or-hat" src="v2/train-val/confusion_matrix_normalized.png">
47
  </div>
48
 
49
 
 
50
 
51
+ ## Model training
52
 
53
+ You can [review the Jupyter notebook here](dev/prototyping.ipynb)
54
 
 
55
 
 
56
 
57
  ### Hyperparameters
58
 
59
  ```
60
+ base model: yolov11x.pt
61
+ epochs: 150
62
+ batch: 16
63
  imgsz: 640
64
+ patience: 15
65
  optimizer: 'SGD'
66
  lr0: 0.001
67
  lrf: 0.01
 
72
  warmup_momentum: 0.8
73
  ```
74
 
75
 
76
  ## Model Usage
77
 
 
82
  Remember to check that the Model URL is pointing to the model that you want to test.
83
 
84
  <div align="center">
85
+ <img width="640" alt="luisarizmendi/hardhat-or-hat" src="images/spaces-example.png">
86
  </div>
87
 
88
 
 
99
  torch
100
  ```
101
 
102
+ Then [run the Python code below](dev/object-detection-model-file/pytorch/object-detection-pytorch.py) and open `http://localhost:8800` in a browser to upload and scan the images.
103
 
104
 
105
  ```
 
110
  import cv2
111
  import torch
112
 
113
+ DEFAULT_MODEL_URL = "https://huggingface.co/luisarizmendi/hardhat-or-hat/tree/main/v2/model/pytorch/best.pt"
114
 
115
  def detect_objects_in_files(model_input, files):
116
  """
confusion_matrix_normalized.png DELETED
Binary file (116 kB)
 
dev/object-detection-model-file/pytorch/Containerfile ADDED
@@ -0,0 +1,21 @@
1
+ FROM registry.access.redhat.com/ubi9/python-39:latest AS base
2
+
3
+ USER root
4
+
5
+ RUN dnf install -y \
6
+ mesa-libGL \
7
+ && dnf clean all \
8
+ && python3 -m ensurepip --upgrade
9
+
10
+ COPY requirements.txt /opt/app-root/src/
11
+
12
+ RUN python3 -m pip install --upgrade pip
13
+ RUN python3 -m pip install --no-cache-dir -r /opt/app-root/src/requirements.txt
14
+
15
+ COPY object-detection-pytorch.py /opt/app-root/src
16
+
17
+ WORKDIR /opt/app-root/src
18
+
19
+ EXPOSE 8800
20
+
21
+ CMD ["python", "object-detection-pytorch.py"]
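Assuming a standard container workflow (the image tag here is an arbitrary editorial example, not something defined in this commit), the Containerfile above can presumably be built and run with `podman build -t object-detection-pytorch -f Containerfile .` followed by `podman run --rm -p 8800:8800 object-detection-pytorch`, after which the Gradio app should be reachable at `http://localhost:8800`.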
run_model.py → dev/object-detection-model-file/pytorch/object-detection-pytorch.py RENAMED
@@ -2,10 +2,19 @@ import gradio as gr
2
  from ultralytics import YOLO
3
  from PIL import Image
4
  import os
5
- import cv2
6
- import torch
7
 
8
- DEFAULT_MODEL_URL = "https://github.com/luisarizmendi/ai-apps/raw/refs/heads/main/models/luisarizmendi/object-detector-hardhat-or-hat/object-detector-hardhat-or-hat.pt"
9
 
10
  def detect_objects_in_files(model_input, files):
11
  """
@@ -14,37 +23,32 @@ def detect_objects_in_files(model_input, files):
14
  if not files:
15
  return "No files uploaded.", []
16
 
17
- model = YOLO(str(model_input))
18
- if torch.cuda.is_available():
19
- model.to('cuda')
20
- print("Using GPU for inference")
21
- else:
22
- print("Using CPU for inference")
23
-
24
  results_images = []
25
  for file in files:
26
  try:
27
  image = Image.open(file).convert("RGB")
28
- results = model(image)
29
  result_img_bgr = results[0].plot()
30
  result_img_rgb = cv2.cvtColor(result_img_bgr, cv2.COLOR_BGR2RGB)
31
- results_images.append(result_img_rgb)
32
-
33
   # If you want images to appear one by one (slower)
34
- #yield "Processing image...", results_images
35
-
36
  except Exception as e:
37
  return f"Error processing file: {file}. Exception: {str(e)}", []
38
 
39
- del model
40
  torch.cuda.empty_cache()
41
-
42
  return "Processing completed.", results_images
43
 
44
  interface = gr.Interface(
45
  fn=detect_objects_in_files,
46
  inputs=[
47
- gr.Textbox(value=DEFAULT_MODEL_URL, label="Model URL", placeholder="Enter the model URL"),
48
  gr.Files(file_types=["image"], label="Select Images"),
49
  ],
50
  outputs=[
@@ -56,4 +60,4 @@ interface = gr.Interface(
56
  )
57
 
58
  if __name__ == "__main__":
59
- interface.launch()
 
2
  from ultralytics import YOLO
3
  from PIL import Image
4
  import os
5
+ import cv2
6
+ import torch
7
 
8
+
9
+ def load_model(model_input):
10
+ model = YOLO(model_input)
11
+ if torch.cuda.is_available():
12
+ model.to('cuda')
13
+ print("Using GPU for inference")
14
+ else:
15
+ print("Using CPU for inference")
16
+
17
+ return model
18
 
19
  def detect_objects_in_files(model_input, files):
20
  """
 
23
  if not files:
24
  return "No files uploaded.", []
25
 
26
+ model = load_model(model_input)
27
+
28
  results_images = []
29
  for file in files:
30
  try:
31
  image = Image.open(file).convert("RGB")
32
+ results = model(image)
33
  result_img_bgr = results[0].plot()
34
  result_img_rgb = cv2.cvtColor(result_img_bgr, cv2.COLOR_BGR2RGB)
35
+ results_images.append(result_img_rgb)
36
+
37
   # If you want images to appear one by one (slower)
38
+ #yield "Processing image...", results_images
39
+
40
  except Exception as e:
41
  return f"Error processing file: {file}. Exception: {str(e)}", []
42
 
43
+ del model
44
  torch.cuda.empty_cache()
45
+
46
  return "Processing completed.", results_images
47
 
48
  interface = gr.Interface(
49
  fn=detect_objects_in_files,
50
  inputs=[
51
+ gr.File(label="Upload Model file"),
52
  gr.Files(file_types=["image"], label="Select Images"),
53
  ],
54
  outputs=[
 
60
  )
61
 
62
  if __name__ == "__main__":
63
+ interface.launch(server_name="0.0.0.0", server_port=8800)
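For completeness, a minimal non-Gradio sanity check of the downloaded weights is sketched below. This is an editorial example under assumptions, not code from this commit: it presumes `ultralytics` is installed, that the v2 `best.pt` has been saved locally (raw files on Hugging Face are typically fetched via a `resolve/main/...` URL rather than the `tree/main/...` page linked in the README), and that `example.png` is any local test image.

```
# Hedged sketch: quick local check of the v2 weights with the Ultralytics API.
from ultralytics import YOLO

model = YOLO("best.pt")            # path to the downloaded v2 weights (assumed local)
results = model("example.png")     # any local image path

for box in results[0].boxes:
    label = results[0].names[int(box.cls)]     # class names come from the model, e.g. helmet / no_helmet / hat
    print(f"{label}: {float(box.conf):.2f}")

annotated = results[0].plot()      # numpy array (BGR) with the detections drawn
```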
dev/object-detection-model-file/pytorch/requirements.txt ADDED
@@ -0,0 +1,5 @@
1
+ gradio
2
+ ultralytics
3
+ Pillow
4
+ opencv-python
5
+ torch
dev/prototyping.ipynb ADDED
@@ -0,0 +1,460 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "markdown",
5
+ "metadata": {},
6
+ "source": [
7
+ "# YOLOv11 Training with Roboflow Dataset\n",
8
+ "\n",
9
+ "This notebook demonstrates how to train a YOLOv11 model using a dataset from Roboflow. It includes:\n",
10
+ "- Automatic GPU/CPU detection\n",
11
+ "- Configurable training parameters\n",
12
+ "- Training visualization and analysis\n",
13
+ "\n",
14
+ "## Step 1: Install Dependencies\n",
15
+ "First, we'll install the required packages."
16
+ ]
17
+ },
18
+ {
19
+ "cell_type": "code",
20
+ "execution_count": null,
21
+ "metadata": {},
22
+ "outputs": [],
23
+ "source": [
24
+ "# For Training\n",
25
+ "!pip install ultralytics roboflow \n",
26
+ "\n",
27
+ "# For Storage\n",
28
+ "!pip install minio"
29
+ ]
30
+ },
31
+ {
32
+ "cell_type": "markdown",
33
+ "metadata": {},
34
+ "source": [
35
+ "## Step 2: Import Libraries\n",
36
+ "Import all necessary libraries for training and analysis."
37
+ ]
38
+ },
39
+ {
40
+ "cell_type": "code",
41
+ "execution_count": null,
42
+ "metadata": {},
43
+ "outputs": [],
44
+ "source": [
45
+ "# Common\n",
46
+ "import os\n",
47
+ "\n",
48
+ "# For Dataset manipulation\n",
49
+ "import yaml\n",
50
+ "from roboflow import Roboflow\n",
51
+ "\n",
52
+ "# For training\n",
53
+ "import torch\n",
54
+ "from ultralytics import YOLO\n",
55
+ "\n",
56
+ "# For Storage\n",
57
+ "from minio import Minio\n",
58
+ "from minio.error import S3Error"
59
+ ]
60
+ },
61
+ {
62
+ "cell_type": "markdown",
63
+ "metadata": {},
64
+ "source": [
65
+ "## Step 3: Download Dataset from Roboflow\n",
66
+ "Connect to Roboflow and download the dataset. Make sure to use your own API key and project details.\n",
67
+ "\n",
68
+ "**Remember to replace the placeholders with your values**."
69
+ ]
70
+ },
71
+ {
72
+ "cell_type": "code",
73
+ "execution_count": null,
74
+ "metadata": {},
75
+ "outputs": [],
76
+ "source": [
77
+ "rf = Roboflow(api_key=\"xxxxxxxxxxxxxxxxx\") # Replace with your API key\n",
78
+ "project = rf.workspace(\"yyyyyyyyyyyyyy\").project(\"zzzzzzzzzzzzzzzzzzz\") # Replace with your workspace and project names\n",
79
+ "version = project.version(1111111111111111111111111111) # Replace with your version number\n",
80
+ "dataset = version.download(\"yolov11\")"
81
+ ]
82
+ },
83
+ {
84
+ "cell_type": "markdown",
85
+ "metadata": {},
86
+ "source": [
87
+ "You'll need to explicitly specify the paths to each data split (training, validation, and test) in your configuration. This ensures YOLO can correctly locate and utilize your dataset files.\n",
88
+ "\n",
89
+ "This is done in the `data.yaml` file. If you open that file you will see these paths that you need to update:\n",
90
+ "\n",
91
+ "```\n",
92
+ "train: ../train/images\n",
93
+ "val: ../valid/images\n",
94
+ "test: ../test/images\n",
95
+ "```"
96
+ ]
97
+ },
98
+ {
99
+ "cell_type": "code",
100
+ "execution_count": null,
101
+ "metadata": {},
102
+ "outputs": [],
103
+ "source": [
104
+ "print(f\"Dataset downloaded to: {dataset.location}\")\n",
105
+ "\n",
106
+ "dataset_yaml_path = f\"{dataset.location}/data.yaml\"\n",
107
+ "\n",
108
+ "with open(dataset_yaml_path, \"r\") as file:\n",
109
+ " data_config = yaml.safe_load(file)\n",
110
+ "\n",
111
+ "data_config[\"train\"] = f\"{dataset.location}/train/images\"\n",
112
+ "data_config[\"val\"] = f\"{dataset.location}/valid/images\"\n",
113
+ "data_config[\"test\"] = f\"{dataset.location}/test/images\"\n",
114
+ "\n",
115
+ "with open(dataset_yaml_path, \"w\") as file:\n",
116
+ " yaml.safe_dump(data_config, file)"
117
+ ]
118
+ },
119
+ {
120
+ "cell_type": "markdown",
121
+ "metadata": {},
122
+ "source": [
123
+ "## Step 4: Configure Hyperparameters\n",
124
+ "Set up GPU/CPU detection (the code automatically detects and uses the GPU if available)."
125
+ ]
126
+ },
127
+ {
128
+ "cell_type": "code",
129
+ "execution_count": null,
130
+ "metadata": {},
131
+ "outputs": [],
132
+ "source": [
133
+ "device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n",
134
+ "print(f\"Using device: {device} ({'GPU' if device.type == 'cuda' else 'CPU'})\")"
135
+ ]
136
+ },
137
+ {
138
+ "cell_type": "markdown",
139
+ "metadata": {},
140
+ "source": [
141
+ "Define all training parameters in a single configuration dictionary."
142
+ ]
143
+ },
144
+ {
145
+ "cell_type": "code",
146
+ "execution_count": null,
147
+ "metadata": {},
148
+ "outputs": [],
149
+ "source": [
150
+ "\n",
151
+ "CONFIG = {\n",
152
+ " 'name': 'yolo_hardhat',\n",
153
+ " 'model': 'yolo11m.pt', # Model size options: n, s, m, l, x\n",
154
+ " 'data': dataset.location + \"/data.yaml\",\n",
155
+ " 'epochs': 1, # Set the number of epochs (keep 1 for Mock Training)\n",
156
+ " 'batch': 1 , # Adjust batch size based on device\n",
157
+ " 'imgsz': 640,\n",
158
+ " 'patience': 15,\n",
159
+ " 'device': device,\n",
160
+ " \n",
161
+ " # Optimizer settings\n",
162
+ " 'optimizer': 'SGD',\n",
163
+ " 'lr0': 0.001,\n",
164
+ " 'lrf': 0.005,\n",
165
+ " 'momentum': 0.9,\n",
166
+ " 'weight_decay': 0.0005,\n",
167
+ " 'warmup_epochs': 3,\n",
168
+ " 'warmup_bias_lr': 0.01,\n",
169
+ " 'warmup_momentum': 0.8,\n",
170
+ " 'amp': False,\n",
171
+ " \n",
172
+ " # Data augmentation settings\n",
173
+ " 'augment': True,\n",
174
+ " 'hsv_h': 0.015, # HSV-Hue augmentation\n",
175
+ " 'hsv_s': 0.7, # HSV-Saturation augmentation\n",
176
+ " 'hsv_v': 0.4, # HSV-Value augmentation\n",
177
+ " 'degrees': 10, # Image rotation (+/- deg)\n",
178
+ " 'translate': 0.1, # Image translation\n",
179
+ " 'scale': 0.3, # Image scale\n",
180
+ " 'shear': 0.0, # Image shear\n",
181
+ " 'perspective': 0.0, # Image perspective\n",
182
+ " 'flipud': 0.1, # Flip up-down\n",
183
+ " 'fliplr': 0.1, # Flip left-right\n",
184
+ " 'mosaic': 1.0, # Mosaic augmentation\n",
185
+ " 'mixup': 0.0, # Mixup augmentation\n",
186
+ "}\n",
187
+ "\n",
188
+ "# Configure PyTorch for GPU memory allocation\n",
189
+ "os.environ[\"PYTORCH_CUDA_ALLOC_CONF\"] = \"expandable_segments:True\""
190
+ ]
191
+ },
192
+ {
193
+ "cell_type": "markdown",
194
+ "metadata": {},
195
+ "source": [
196
+ "## Step 5: Load Model\n",
197
+ "Initialize the YOLO model."
198
+ ]
199
+ },
200
+ {
201
+ "cell_type": "code",
202
+ "execution_count": null,
203
+ "metadata": {},
204
+ "outputs": [],
205
+ "source": [
206
+ "model = YOLO(CONFIG['model'])"
207
+ ]
208
+ },
209
+ {
210
+ "cell_type": "markdown",
211
+ "metadata": {},
212
+ "source": [
213
+ "## Step 6: Start Training\n",
214
+ "\n",
215
+ "Begin the training process. By default, the `train` method handles both \"training\" and \"validation\" sets. "
216
+ ]
217
+ },
218
+ {
219
+ "cell_type": "code",
220
+ "execution_count": null,
221
+ "metadata": {},
222
+ "outputs": [],
223
+ "source": [
224
+ "results_train = model.train(\n",
225
+ " name=CONFIG['name'],\n",
226
+ " data=CONFIG['data'],\n",
227
+ " epochs=CONFIG['epochs'],\n",
228
+ " batch=CONFIG['batch'],\n",
229
+ " imgsz=CONFIG['imgsz'],\n",
230
+ " patience=CONFIG['patience'],\n",
231
+ " device=CONFIG['device'],\n",
232
+ " verbose=True,\n",
233
+ " \n",
234
+ " # Optimizer parameters\n",
235
+ " optimizer=CONFIG['optimizer'],\n",
236
+ " lr0=CONFIG['lr0'],\n",
237
+ " lrf=CONFIG['lrf'],\n",
238
+ " momentum=CONFIG['momentum'],\n",
239
+ " weight_decay=CONFIG['weight_decay'],\n",
240
+ " warmup_epochs=CONFIG['warmup_epochs'],\n",
241
+ " warmup_bias_lr=CONFIG['warmup_bias_lr'],\n",
242
+ " warmup_momentum=CONFIG['warmup_momentum'],\n",
243
+ " amp=CONFIG['amp'],\n",
244
+ " \n",
245
+ " # Augmentation parameters\n",
246
+ " augment=CONFIG['augment'],\n",
247
+ " hsv_h=CONFIG['hsv_h'],\n",
248
+ " hsv_s=CONFIG['hsv_s'],\n",
249
+ " hsv_v=CONFIG['hsv_v'],\n",
250
+ " degrees=CONFIG['degrees'],\n",
251
+ " translate=CONFIG['translate'],\n",
252
+ " scale=CONFIG['scale'],\n",
253
+ " shear=CONFIG['shear'],\n",
254
+ " perspective=CONFIG['perspective'],\n",
255
+ " flipud=CONFIG['flipud'],\n",
256
+ " fliplr=CONFIG['fliplr'],\n",
257
+ " mosaic=CONFIG['mosaic'],\n",
258
+ " mixup=CONFIG['mixup'],\n",
259
+ ")"
260
+ ]
261
+ },
262
+ {
263
+ "cell_type": "markdown",
264
+ "metadata": {},
265
+ "source": [
266
+ "## Step 7: Evaluate Model\n",
267
+ "\n",
268
+ " Evaluate the model on the test set."
269
+ ]
270
+ },
271
+ {
272
+ "cell_type": "code",
273
+ "execution_count": null,
274
+ "metadata": {},
275
+ "outputs": [],
276
+ "source": [
277
+ "results_test = model.val(data=CONFIG['data'], split='test', device=CONFIG['device'], imgsz=CONFIG['imgsz'])\n",
278
+ "\n",
279
+ "#print(\"Test Results:\", results_test)"
280
+ ]
281
+ },
282
+ {
283
+ "cell_type": "markdown",
284
+ "metadata": {},
285
+ "source": [
286
+ "## Step 8: (Optional) Model Export\n",
287
+ "\n",
288
+ "Export the trained YOLO model to ONNX format for deployment."
289
+ ]
290
+ },
291
+ {
292
+ "cell_type": "code",
293
+ "execution_count": null,
294
+ "metadata": {},
295
+ "outputs": [],
296
+ "source": [
297
+ "model.export(format='onnx', imgsz=CONFIG['imgsz'])"
298
+ ]
299
+ },
300
+ {
301
+ "cell_type": "markdown",
302
+ "metadata": {},
303
+ "source": [
304
+ "Export the trained YOLO model to TorchScript"
305
+ ]
306
+ },
307
+ {
308
+ "cell_type": "code",
309
+ "execution_count": null,
310
+ "metadata": {},
311
+ "outputs": [],
312
+ "source": [
313
+ "#model.export(format=\"torchscript\")"
314
+ ]
315
+ },
316
+ {
317
+ "cell_type": "markdown",
318
+ "metadata": {},
319
+ "source": [
320
+ "## Step 9: Store the Model\n",
321
+ "\n",
322
+ "Save the trained model to the Object Storage system configured in your Workbench connection. \n",
323
+ "\n",
324
+ "Start by getting the credentials and configuring variables for accessing Object Storage."
325
+ ]
326
+ },
327
+ {
328
+ "cell_type": "code",
329
+ "execution_count": null,
330
+ "metadata": {},
331
+ "outputs": [],
332
+ "source": [
333
+ "AWS_S3_ENDPOINT_NAME = os.getenv(\"AWS_S3_ENDPOINT\", \"\").replace('https://', '').replace('http://', '')\n",
334
+ "AWS_ACCESS_KEY_ID = os.getenv(\"AWS_ACCESS_KEY_ID\")\n",
335
+ "AWS_SECRET_ACCESS_KEY = os.getenv(\"AWS_SECRET_ACCESS_KEY\")\n",
336
+ "AWS_S3_BUCKET = os.getenv(\"AWS_S3_BUCKET\")"
337
+ ]
338
+ },
339
+ {
340
+ "cell_type": "markdown",
341
+ "metadata": {},
342
+ "source": [
343
+ "Define the S3 client."
344
+ ]
345
+ },
346
+ {
347
+ "cell_type": "code",
348
+ "execution_count": null,
349
+ "metadata": {},
350
+ "outputs": [],
351
+ "source": [
352
+ "client = Minio(\n",
353
+ " AWS_S3_ENDPOINT_NAME,\n",
354
+ " access_key=AWS_ACCESS_KEY_ID,\n",
355
+ " secret_key=AWS_SECRET_ACCESS_KEY,\n",
356
+ " secure=True\n",
357
+ ")"
358
+ ]
359
+ },
360
+ {
361
+ "cell_type": "markdown",
362
+ "metadata": {},
363
+ "source": [
364
+ "Select files to be uploaded (files generated while training and validating the model)"
365
+ ]
366
+ },
367
+ {
368
+ "cell_type": "code",
369
+ "execution_count": null,
370
+ "metadata": {},
371
+ "outputs": [],
372
+ "source": [
373
+ "model_path_train = results_train.save_dir\n",
374
+ "weights_path = os.path.join(model_path_train, \"weights\")\n",
375
+ "model_path_test = results_test.save_dir\n",
376
+ "\n",
377
+ "files_train = [os.path.join(model_path_train, f) for f in os.listdir(model_path_train) if os.path.isfile(os.path.join(model_path_train, f))]\n",
378
+ "files_models = [os.path.join(weights_path, f) for f in os.listdir(weights_path) if os.path.isfile(os.path.join(weights_path, f))]\n",
379
+ "files_test = [os.path.join(model_path_test, f) for f in os.listdir(model_path_test) if os.path.isfile(os.path.join(model_path_test, f))]"
380
+ ]
381
+ },
382
+ {
383
+ "cell_type": "markdown",
384
+ "metadata": {},
385
+ "source": [
386
+ "Upload the files."
387
+ ]
388
+ },
389
+ {
390
+ "cell_type": "code",
391
+ "execution_count": null,
392
+ "metadata": {},
393
+ "outputs": [],
394
+ "source": [
395
+ "directory_name= os.path.basename(model_path_train)\n",
396
+ "\n",
397
+ "for file_path_train in files_train:\n",
398
+ " try:\n",
399
+ " client.fput_object(AWS_S3_BUCKET, \"prototype/notebook/\" + directory_name + \"/train-val/\" + os.path.basename(file_path_train), file_path_train)\n",
400
+ " print(f\"'{os.path.basename(file_path_train)}' is successfully uploaded as object to bucket '{AWS_S3_BUCKET}'.\")\n",
401
+ " except S3Error as e:\n",
402
+ " print(\"Error occurred: \", e)\n",
403
+ "\n",
404
+ "for file_path_model in files_models:\n",
405
+ " try:\n",
406
+ " client.fput_object(AWS_S3_BUCKET, \"prototype/notebook/\" + directory_name + \"/\" + os.path.basename(file_path_model), file_path_model)\n",
407
+ " print(f\"'{os.path.basename(file_path_model)}' is successfully uploaded as object to bucket '{AWS_S3_BUCKET}'.\")\n",
408
+ " except S3Error as e:\n",
409
+ " print(\"Error occurred: \", e)\n",
410
+ "\n",
411
+ "for file_path_test in files_test:\n",
412
+ " try:\n",
413
+ " client.fput_object(AWS_S3_BUCKET, \"prototype/notebook/\" + directory_name + \"/test/\" + os.path.basename(file_path_test), file_path_test)\n",
414
+ " print(f\"'{os.path.basename(file_path_test)}' is successfully uploaded as object to bucket '{AWS_S3_BUCKET}'.\")\n",
415
+ " except S3Error as e:\n",
416
+ " print(\"Error occurred: \", e)"
417
+ ]
418
+ },
419
+ {
420
+ "cell_type": "markdown",
421
+ "metadata": {},
422
+ "source": [
423
+ "## Step 10: Remove local files\n",
424
+ "\n",
425
+ "Once you have uploaded the model data to Object Storage, you can remove the local files to save disk space."
426
+ ]
427
+ },
428
+ {
429
+ "cell_type": "code",
430
+ "execution_count": null,
431
+ "metadata": {},
432
+ "outputs": [],
433
+ "source": [
434
+ "!rm -rf {model_path_train}\n",
435
+ "!rm -rf {model_path_test}"
436
+ ]
437
+ }
438
+ ],
439
+ "metadata": {
440
+ "kernelspec": {
441
+ "display_name": "Python 3",
442
+ "language": "python",
443
+ "name": "python3"
444
+ },
445
+ "language_info": {
446
+ "codemirror_mode": {
447
+ "name": "ipython",
448
+ "version": 3
449
+ },
450
+ "file_extension": ".py",
451
+ "mimetype": "text/x-python",
452
+ "name": "python",
453
+ "nbconvert_exporter": "python",
454
+ "pygments_lexer": "ipython3",
455
+ "version": "3.12.7"
456
+ }
457
+ },
458
+ "nbformat": 4,
459
+ "nbformat_minor": 4
460
+ }
example.png → images/example.png RENAMED
File without changes
spaces-example.png → images/spaces-example.png RENAMED
File without changes
results.png DELETED
Binary file (259 kB)
 
train.ipynb DELETED
The diff for this file is too large to render.
 
v1/model/onnx/1/model.onnx ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f627d45c6e1b38d3c867310f852d4de27a618e8e956629a19a28220eefdb7ae8
3
+ size 227654959
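The ONNX export stored under `v1/model/onnx/1/` can be sanity-checked outside the YOLO stack. The snippet below is only a hedged illustration (it assumes `onnxruntime` and `numpy` are installed); the 640×640 input follows the `imgsz` used for training, and the raw output still requires the usual YOLO post-processing (confidence filtering and NMS).

```
# Hedged sketch: load the exported ONNX model and run a dummy forward pass.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("v1/model/onnx/1/model.onnx")
inp = session.get_inputs()[0]
print(inp.name, inp.shape)                         # expected to be close to [1, 3, 640, 640]

dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)
outputs = session.run(None, {inp.name: dummy})
print([o.shape for o in outputs])                  # raw detection head output, pre-NMS
```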
v1/model/pytorch/best.pt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ac685a2869e65cf547a1b0b5832fa210ab8b5f18d066720a6a6c8e9bcd7d3f90
3
+ size 114396114
v1/test/F1_curve.png ADDED
v1/test/PR_curve.png ADDED
v1/test/P_curve.png ADDED
v1/test/R_curve.png ADDED
v1/test/confusion_matrix.png ADDED
v1/test/confusion_matrix_normalized.png ADDED
v1/test/val_batch0_labels.jpg ADDED
v1/test/val_batch0_pred.jpg ADDED
v1/test/val_batch1_labels.jpg ADDED
v1/test/val_batch1_pred.jpg ADDED
v1/test/val_batch2_labels.jpg ADDED
v1/test/val_batch2_pred.jpg ADDED
v1/train-val/F1_curve.png ADDED
v1/train-val/PR_curve.png ADDED
v1/train-val/P_curve.png ADDED
v1/train-val/R_curve.png ADDED
v1/train-val/args.yaml ADDED
@@ -0,0 +1,106 @@
1
+ task: detect
2
+ mode: train
3
+ model: yolo11x.pt
4
+ data: /opt/app-root/src/Hardhat-or-Hat-4/data.yaml
5
+ epochs: 65
6
+ time: null
7
+ patience: 100
8
+ batch: 16
9
+ imgsz: 640
10
+ save: true
11
+ save_period: -1
12
+ cache: false
13
+ device: cuda:0
14
+ workers: 8
15
+ project: null
16
+ name: hardhat-v1
17
+ exist_ok: false
18
+ pretrained: true
19
+ optimizer: auto
20
+ verbose: true
21
+ seed: 0
22
+ deterministic: true
23
+ single_cls: false
24
+ rect: false
25
+ cos_lr: false
26
+ close_mosaic: 10
27
+ resume: false
28
+ amp: true
29
+ fraction: 1.0
30
+ profile: false
31
+ freeze: null
32
+ multi_scale: false
33
+ overlap_mask: true
34
+ mask_ratio: 4
35
+ dropout: 0.0
36
+ val: true
37
+ split: val
38
+ save_json: false
39
+ save_hybrid: false
40
+ conf: null
41
+ iou: 0.7
42
+ max_det: 300
43
+ half: false
44
+ dnn: false
45
+ plots: true
46
+ source: null
47
+ vid_stride: 1
48
+ stream_buffer: false
49
+ visualize: false
50
+ augment: false
51
+ agnostic_nms: false
52
+ classes: null
53
+ retina_masks: false
54
+ embed: null
55
+ show: false
56
+ save_frames: false
57
+ save_txt: false
58
+ save_conf: false
59
+ save_crop: false
60
+ show_labels: true
61
+ show_conf: true
62
+ show_boxes: true
63
+ line_width: null
64
+ format: torchscript
65
+ keras: false
66
+ optimize: false
67
+ int8: false
68
+ dynamic: false
69
+ simplify: true
70
+ opset: null
71
+ workspace: null
72
+ nms: false
73
+ lr0: 0.01
74
+ lrf: 0.01
75
+ momentum: 0.937
76
+ weight_decay: 0.0005
77
+ warmup_epochs: 3.0
78
+ warmup_momentum: 0.8
79
+ warmup_bias_lr: 0.1
80
+ box: 7.5
81
+ cls: 0.5
82
+ dfl: 1.5
83
+ pose: 12.0
84
+ kobj: 1.0
85
+ nbs: 64
86
+ hsv_h: 0.015
87
+ hsv_s: 0.7
88
+ hsv_v: 0.4
89
+ degrees: 0.0
90
+ translate: 0.1
91
+ scale: 0.5
92
+ shear: 0.0
93
+ perspective: 0.0
94
+ flipud: 0.0
95
+ fliplr: 0.5
96
+ bgr: 0.0
97
+ mosaic: 1.0
98
+ mixup: 0.0
99
+ copy_paste: 0.0
100
+ copy_paste_mode: flip
101
+ auto_augment: randaugment
102
+ erasing: 0.4
103
+ crop_fraction: 1.0
104
+ cfg: null
105
+ tracker: botsort.yaml
106
+ save_dir: runs/detect/hardhat-v1
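If needed, the recorded run arguments above can be read back programmatically; a small sketch assuming PyYAML is available:

```
# Hedged sketch: inspect the recorded v1 training arguments.
import yaml

with open("v1/train-val/args.yaml") as f:
    args = yaml.safe_load(f)

print(args["model"], args["epochs"], args["batch"], args["imgsz"])      # yolo11x.pt 65 16 640
print({k: args[k] for k in ("lr0", "lrf", "momentum", "weight_decay")})
```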
v1/train-val/confusion_matrix.png ADDED
v1/train-val/confusion_matrix_normalized.png ADDED
v1/train-val/events.out.tfevents.1738747289.yolo-training-pipeline-7vd94-system-container-impl-3115668440.58.0 ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1c8e31adb9cbc1433ea21b05e89822932b62d41d0e4aaa9dd1ef96316016f722
3
+ size 575556
v1/train-val/labels.jpg ADDED
v1/train-val/labels_correlogram.jpg ADDED
v1/train-val/results.csv ADDED
@@ -0,0 +1,66 @@
1
+ epoch,time,train/box_loss,train/cls_loss,train/dfl_loss,metrics/precision(B),metrics/recall(B),metrics/mAP50(B),metrics/mAP50-95(B),val/box_loss,val/cls_loss,val/dfl_loss,lr/pg0,lr/pg1,lr/pg2
2
+ 1,420.197,1.76466,1.3675,1.41227,0.69048,0.59506,0.61857,0.2897,1.86412,43.6363,1.57201,0.000554558,0.000554558,0.000554558
3
+ 2,839.326,1.77756,1.26965,1.43135,0.74072,0.692,0.74204,0.36847,1.7298,18.9138,1.48066,0.00109331,0.00109331,0.00109331
4
+ 3,1258.67,1.73897,1.17632,1.39286,0.79341,0.71795,0.76678,0.37953,1.75397,16.0918,1.49124,0.00161515,0.00161515,0.00161515
5
+ 4,1674.2,1.68088,1.10288,1.37802,0.8302,0.71768,0.78607,0.40215,1.66923,10.7206,1.40826,0.00159083,0.00159083,0.00159083
6
+ 5,2090.74,1.64687,1.02348,1.34771,0.83179,0.74867,0.81312,0.4189,1.6205,10.304,1.40075,0.00156544,0.00156544,0.00156544
7
+ 6,2506.32,1.62056,0.98386,1.33049,0.83917,0.77561,0.84269,0.44661,1.56395,9.86296,1.35934,0.00154005,0.00154005,0.00154005
8
+ 7,2922.8,1.60215,0.95999,1.31377,0.83441,0.79257,0.84333,0.44116,1.57241,10.8663,1.34167,0.00151466,0.00151466,0.00151466
9
+ 8,3334.68,1.57973,0.92819,1.3091,0.87544,0.78634,0.85408,0.45163,1.55834,6.10121,1.3475,0.00148927,0.00148927,0.00148927
10
+ 9,3750.39,1.57057,0.90039,1.29688,0.87071,0.80639,0.86214,0.47051,1.52804,10.2566,1.32481,0.00146388,0.00146388,0.00146388
11
+ 10,4167.27,1.55595,0.88934,1.28636,0.8698,0.81789,0.86987,0.47231,1.53976,9.33876,1.31913,0.00143849,0.00143849,0.00143849
12
+ 11,4583.81,1.54215,0.87324,1.28506,0.88347,0.82256,0.87144,0.4812,1.48532,9.24971,1.30922,0.0014131,0.0014131,0.0014131
13
+ 12,5001.38,1.54151,0.86207,1.28353,0.8784,0.82969,0.88287,0.48807,1.49778,7.72873,1.32511,0.00138771,0.00138771,0.00138771
14
+ 13,5415.7,1.5211,0.84013,1.27191,0.88617,0.82019,0.87695,0.48668,1.52076,8.26148,1.32031,0.00136232,0.00136232,0.00136232
15
+ 14,5826.82,1.51077,0.83545,1.27206,0.87902,0.82907,0.87439,0.48606,1.48204,8.31702,1.29191,0.00133693,0.00133693,0.00133693
16
+ 15,6242.8,1.50465,0.82595,1.27102,0.88299,0.83127,0.88331,0.49249,1.50724,7.10467,1.31589,0.00131154,0.00131154,0.00131154
17
+ 16,6660.76,1.4939,0.80938,1.26095,0.89125,0.83151,0.89092,0.50283,1.48565,9.45108,1.32337,0.00128615,0.00128615,0.00128615
18
+ 17,7078.61,1.48761,0.8017,1.25753,0.88771,0.84979,0.89542,0.5054,1.45685,8.13793,1.28446,0.00126076,0.00126076,0.00126076
19
+ 18,7495.5,1.48686,0.79459,1.2588,0.8949,0.8443,0.89686,0.5023,1.46864,7.05671,1.28017,0.00123538,0.00123538,0.00123538
20
+ 19,7910.69,1.47168,0.78027,1.2491,0.90491,0.84576,0.90022,0.5131,1.44692,8.90855,1.272,0.00120999,0.00120999,0.00120999
21
+ 20,8329.22,1.47008,0.77873,1.24647,0.90058,0.85119,0.90177,0.51425,1.45985,7.07057,1.28536,0.0011846,0.0011846,0.0011846
22
+ 21,8745.93,1.46171,0.7683,1.24354,0.89392,0.85369,0.90515,0.51614,1.43449,8.89426,1.26369,0.00115921,0.00115921,0.00115921
23
+ 22,9160.68,1.46389,0.76362,1.23712,0.90181,0.85503,0.90733,0.51803,1.45356,5.77393,1.27003,0.00113382,0.00113382,0.00113382
24
+ 23,9574.71,1.44836,0.75717,1.23572,0.90458,0.85374,0.90546,0.52039,1.43046,5.95705,1.25964,0.00110843,0.00110843,0.00110843
25
+ 24,9990.04,1.44895,0.75434,1.23721,0.90735,0.86203,0.91221,0.52982,1.41725,7.44996,1.25801,0.00108304,0.00108304,0.00108304
26
+ 25,10403.9,1.43166,0.74203,1.2319,0.90331,0.86436,0.91379,0.5267,1.39922,7.6366,1.24981,0.00105765,0.00105765,0.00105765
27
+ 26,10816.5,1.43754,0.74064,1.22509,0.91249,0.86279,0.91654,0.52886,1.42981,6.21731,1.25763,0.00103226,0.00103226,0.00103226
28
+ 27,11228.5,1.41658,0.72808,1.21639,0.91225,0.86791,0.91525,0.52832,1.41286,6.77402,1.25017,0.00100687,0.00100687,0.00100687
29
+ 28,11641.9,1.41626,0.7165,1.2141,0.90914,0.86191,0.91549,0.5287,1.4082,6.06791,1.24737,0.000981478,0.000981478,0.000981478
30
+ 29,12056.4,1.41408,0.71239,1.2203,0.90652,0.87064,0.91798,0.53635,1.39849,6.19022,1.24031,0.000956089,0.000956089,0.000956089
31
+ 30,12472.4,1.40875,0.7043,1.20592,0.90866,0.8748,0.92067,0.53884,1.40611,6.55112,1.25015,0.000930699,0.000930699,0.000930699
32
+ 31,12888.2,1.40392,0.70164,1.21071,0.90426,0.87378,0.92033,0.53245,1.42493,5.23211,1.26607,0.000905309,0.000905309,0.000905309
33
+ 32,13301.7,1.39348,0.69953,1.2021,0.90511,0.87863,0.92044,0.53689,1.41419,5.92404,1.24944,0.00087992,0.00087992,0.00087992
34
+ 33,13716.9,1.3945,0.69122,1.19624,0.9155,0.86943,0.92286,0.54393,1.3903,4.79925,1.23335,0.00085453,0.00085453,0.00085453
35
+ 34,14135.2,1.39468,0.69059,1.20069,0.91386,0.87446,0.92256,0.54283,1.40012,5.45723,1.23988,0.00082914,0.00082914,0.00082914
36
+ 35,14549.4,1.3814,0.68205,1.20087,0.91474,0.87875,0.92608,0.54304,1.38331,5.19335,1.22407,0.00080375,0.00080375,0.00080375
37
+ 36,14962.9,1.37752,0.67273,1.18567,0.90719,0.88408,0.92689,0.54817,1.37037,5.24723,1.22364,0.000778361,0.000778361,0.000778361
38
+ 37,15380.3,1.36155,0.66455,1.18639,0.91162,0.88571,0.9251,0.54687,1.39275,5.95208,1.24004,0.000752971,0.000752971,0.000752971
39
+ 38,15796.4,1.37144,0.66996,1.18581,0.91233,0.88473,0.92691,0.54951,1.38884,4.90416,1.23288,0.000727581,0.000727581,0.000727581
40
+ 39,16209.9,1.3548,0.66107,1.18919,0.91588,0.88186,0.92487,0.5489,1.39178,6.25481,1.23724,0.000702192,0.000702192,0.000702192
41
+ 40,16622.9,1.35476,0.65003,1.1756,0.91966,0.88194,0.92784,0.54813,1.39846,4.85145,1.24367,0.000676802,0.000676802,0.000676802
42
+ 41,17037,1.33818,0.65152,1.1746,0.91753,0.8879,0.92956,0.55121,1.37909,5.5499,1.23381,0.000651412,0.000651412,0.000651412
43
+ 42,17453.7,1.3482,0.644,1.17567,0.91197,0.89207,0.92989,0.54999,1.38641,4.79643,1.22743,0.000626023,0.000626023,0.000626023
44
+ 43,17867,1.34097,0.64098,1.17679,0.91479,0.8878,0.92733,0.55304,1.38008,5.51721,1.23068,0.000600633,0.000600633,0.000600633
45
+ 44,18284.8,1.32515,0.63356,1.16227,0.91756,0.88754,0.92952,0.55259,1.38325,4.33451,1.22476,0.000575243,0.000575243,0.000575243
46
+ 45,18699.2,1.32723,0.62945,1.15872,0.91107,0.89291,0.92782,0.55525,1.35769,5.26302,1.21758,0.000549854,0.000549854,0.000549854
47
+ 46,19114.7,1.31553,0.62555,1.16599,0.91348,0.8929,0.93068,0.5567,1.37818,3.97822,1.23017,0.000524464,0.000524464,0.000524464
48
+ 47,19530.8,1.32135,0.62524,1.15718,0.91432,0.88956,0.93094,0.55803,1.37465,4.03416,1.22348,0.000499074,0.000499074,0.000499074
49
+ 48,19946.5,1.31266,0.61786,1.15514,0.91793,0.88933,0.93174,0.55797,1.37422,4.11166,1.21816,0.000473684,0.000473684,0.000473684
50
+ 49,20360.6,1.29918,0.61207,1.15091,0.91665,0.88888,0.93011,0.55837,1.38179,4.34154,1.23031,0.000448295,0.000448295,0.000448295
51
+ 50,20779,1.29156,0.60563,1.14818,0.91274,0.89536,0.93308,0.56161,1.3593,4.09332,1.21866,0.000422905,0.000422905,0.000422905
52
+ 51,21197.4,1.28262,0.59898,1.14426,0.91756,0.88948,0.93277,0.55996,1.37129,3.96135,1.22,0.000397515,0.000397515,0.000397515
53
+ 52,21613.7,1.2948,0.60087,1.13477,0.91983,0.88787,0.93326,0.56319,1.36529,4.23322,1.21482,0.000372126,0.000372126,0.000372126
54
+ 53,22032.1,1.28167,0.59862,1.13907,0.91545,0.8904,0.93411,0.56254,1.37412,3.59667,1.22067,0.000346736,0.000346736,0.000346736
55
+ 54,22447.2,1.27234,0.58804,1.13583,0.92025,0.89169,0.93383,0.56312,1.3728,3.7638,1.22111,0.000321346,0.000321346,0.000321346
56
+ 55,22861,1.26801,0.5901,1.1426,0.91661,0.89558,0.93229,0.55996,1.3774,3.96537,1.2248,0.000295957,0.000295957,0.000295957
57
+ 56,23272,1.27931,0.5416,1.15899,0.91757,0.89293,0.93398,0.56245,1.37568,3.95546,1.22112,0.000270567,0.000270567,0.000270567
58
+ 57,23683.4,1.26766,0.53934,1.15566,0.91818,0.89475,0.93649,0.56225,1.36001,3.52243,1.21578,0.000245177,0.000245177,0.000245177
59
+ 58,24093.9,1.25825,0.5277,1.148,0.91632,0.89555,0.93575,0.56493,1.36998,3.27439,1.2216,0.000219788,0.000219788,0.000219788
60
+ 59,24510.8,1.2514,0.52315,1.14515,0.92381,0.89124,0.9367,0.56514,1.37431,3.31534,1.22143,0.000194398,0.000194398,0.000194398
61
+ 60,24928,1.24044,0.52045,1.13895,0.92005,0.89104,0.9366,0.5659,1.36337,3.28065,1.22353,0.000169008,0.000169008,0.000169008
62
+ 61,25344.2,1.23896,0.51676,1.14096,0.91974,0.89483,0.93676,0.56624,1.38409,3.09404,1.23167,0.000143618,0.000143618,0.000143618
63
+ 62,25760.9,1.22985,0.51493,1.13532,0.91702,0.89687,0.93706,0.56638,1.377,2.91451,1.22841,0.000118229,0.000118229,0.000118229
64
+ 63,26178.4,1.22592,0.51087,1.13185,0.9241,0.89015,0.9373,0.56567,1.37699,3.24137,1.22688,9.28391e-05,9.28391e-05,9.28391e-05
65
+ 64,26592.7,1.21817,0.50486,1.12694,0.91728,0.89808,0.93738,0.56605,1.38087,2.86081,1.23321,6.74494e-05,6.74494e-05,6.74494e-05
66
+ 65,27007.6,1.21098,0.50075,1.12596,0.92228,0.89279,0.9375,0.56717,1.37967,2.77338,1.23488,4.20597e-05,4.20597e-05,4.20597e-05
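The per-epoch log above uses the standard Ultralytics column names (see the header row), so it can be plotted directly; a hedged sketch assuming pandas and matplotlib are installed:

```
# Hedged sketch: plot the v1 mAP curves from results.csv.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("v1/train-val/results.csv")
df.columns = df.columns.str.strip()        # guard against padded column names

fig, ax = plt.subplots()
ax.plot(df["epoch"], df["metrics/mAP50(B)"], label="mAP50")
ax.plot(df["epoch"], df["metrics/mAP50-95(B)"], label="mAP50-95")
ax.set_xlabel("epoch")
ax.set_ylabel("mAP")
ax.legend()
fig.savefig("v1_training_curves.png")
```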
v1/train-val/results.png ADDED
v1/train-val/train_batch0.jpg ADDED
v1/train-val/train_batch1.jpg ADDED
v1/train-val/train_batch2.jpg ADDED
v1/train-val/train_batch27555.jpg ADDED
v1/train-val/train_batch27556.jpg ADDED
v1/train-val/train_batch27557.jpg ADDED
v1/train-val/val_batch0_labels.jpg ADDED
v1/train-val/val_batch0_pred.jpg ADDED
v1/train-val/val_batch1_labels.jpg ADDED
v1/train-val/val_batch1_pred.jpg ADDED
v1/train-val/val_batch2_labels.jpg ADDED
v1/train-val/val_batch2_pred.jpg ADDED
v2/model/pytorch/best.pt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6f304fbab2cbfbb26575146980ef4eedb7a678ea1d0b3379ccc2a8e2ff77541b
3
+ size 114398418
v2/test/F1_curve.png ADDED