Bad results with onnx-model

#1
by bankholdup - opened

Hi!

I made my own program with ONNX inference using your model, and I get very bad results.

For example, I get high helmet confidence (around 0.95) on images of caps.

Could you make a Hugging Face Space or something so we can compare results?

Qualcomm org

Hi there.

Did you try the quantized model or float model?

Qualcomm org

Could you also confirm what detection threshold you’re targeting?

I've tried both of them (quantized and float) with the same results.

The threshold was 0.95.

Attached the result:
Без имени.png

Here are a couple of recommendations from the team that trained the model:

  1. Set the threshold to 0.995.
  2. The model is trained on cropped person image patches for two classes: helmet and safety vest. For good accuracy, the input should actually contain a helmet or safety vest.
  3. To use the PPE detector properly, a person detector needs to run first to provide a full-person bounding box; crop the person image patch, and then run the PPE detector on that patch.
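The two-stage flow above can be sketched in Python. This is only an illustration of the pipeline structure and the 0.995 threshold: `detect_persons` and `run_ppe_detector` are hypothetical placeholder functions standing in for a real person detector and the ONNX PPE model, not part of any released API.

```python
import numpy as np

# Recommended confidence threshold from the thread above.
PPE_THRESHOLD = 0.995

def detect_persons(image):
    # Placeholder for a real person detector; returns boxes as (x1, y1, x2, y2).
    # For illustration, pretend the whole image is one detected person.
    h, w = image.shape[:2]
    return [(0, 0, w, h)]

def run_ppe_detector(person_patch):
    # Placeholder for ONNX inference on a cropped person patch.
    # A real implementation would resize the patch and run an
    # onnxruntime session on it. Returns (class_name, confidence) pairs
    # for the model's two classes. Scores here are made up.
    return [("helmet", 0.30), ("safety_vest", 0.999)]

def detect_ppe(image):
    """Two-stage pipeline: detect persons, crop each patch, run PPE detection."""
    results = []
    for (x1, y1, x2, y2) in detect_persons(image):
        patch = image[y1:y2, x1:x2]            # crop the full-person patch
        for cls, conf in run_ppe_detector(patch):
            if conf >= PPE_THRESHOLD:          # keep only high-confidence hits
                results.append((cls, conf))
    return results

image = np.zeros((480, 640, 3), dtype=np.uint8)  # dummy image
print(detect_ppe(image))
```

With the stub scores above, only the safety-vest detection survives the 0.995 threshold, which is the behavior the team recommends: feed the PPE model cropped person patches, not full scenes, and filter aggressively.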

If you'd like a general face detection pipeline (not specific to Personal Protective Equipment (PPE)), I recommend looking at https://huggingface.co/qualcomm/MediaPipe-Face-Detection.
The MediaPipe model consists of two parts: a face detector and a "landmark generator" for face features such as the eyes and mouth. If you only need face detection, you can use just the Face Detector model.
