---
title: Food Weight Benchmark
emoji: 🥇
colorFrom: green
colorTo: indigo
sdk: gradio
sdk_version: 5.19.0
app_file: app.py
pinned: true
license: cc-by-nc-4.0
short_description: Food detection and weight prediction benchmark
---
This leaderboard is designed to evaluate CSV submissions containing predictions for object detection and food weight estimation.
## Submission Format

Submissions should be provided as CSV files and must include the following columns:

`image_id, xmin, ymin, xmax, ymax, weight`

- `image_id`: Unique identifier for the image.
- `xmin`, `ymin`, `xmax`, `ymax`: Coordinates of the predicted bounding box.
- `weight`: Predicted food weight in grams.
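
For reference, a submission file can be produced with pandas like the following minimal sketch. The column names follow the format above; the image IDs and values are illustrative only.

```python
# Minimal sketch: build a valid submission CSV with pandas.
# Column names follow the required format; rows are illustrative values only.
import pandas as pd

predictions = pd.DataFrame(
    [
        {"image_id": "img_0001", "xmin": 34, "ymin": 50, "xmax": 210, "ymax": 198, "weight": 152.4},
        {"image_id": "img_0002", "xmin": 12, "ymin": 8, "xmax": 96, "ymax": 120, "weight": 87.0},
    ],
    columns=["image_id", "xmin", "ymin", "xmax", "ymax", "weight"],
)
predictions.to_csv("submission.csv", index=False)
```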
## Evaluation Process

**Ground Truth:** The hidden ground truth labels are stored in a CSV file located at `data/ground_truth.csv` and should have the same column format as above.

**Evaluation Steps:**
- The application loads the user's uploaded submission CSV file.
- It merges the submission with the ground truth data based on `image_id`.
- It computes the Intersection over Union (IoU) for the bounding boxes.
- It computes the absolute error for the food weight predictions.
- It calculates overall metrics (sketched in code after this list), such as:
  - Mean IoU
  - Mean Weight Error (in grams)
  - Combined Score (by default defined as `mean_iou - (mean_weight_error / 100.0)`; adjust as needed)
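
A minimal sketch of these steps follows. The actual functions live in `app.py` and may differ; this version assumes one predicted box per ground-truth row after the merge, with ground-truth columns suffixed `_gt`.

```python
# Sketch of the evaluation steps above; the real implementation is in app.py.
# Assumes one predicted box per ground-truth row after merging on image_id,
# with ground-truth columns suffixed "_gt".
import pandas as pd

def iou(row):
    """Intersection over Union of the predicted and ground-truth boxes."""
    ix1, iy1 = max(row["xmin"], row["xmin_gt"]), max(row["ymin"], row["ymin_gt"])
    ix2, iy2 = min(row["xmax"], row["xmax_gt"]), min(row["ymax"], row["ymax_gt"])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_pred = (row["xmax"] - row["xmin"]) * (row["ymax"] - row["ymin"])
    area_gt = (row["xmax_gt"] - row["xmin_gt"]) * (row["ymax_gt"] - row["ymin_gt"])
    union = area_pred + area_gt - inter
    return inter / union if union > 0 else 0.0

submission = pd.read_csv("submission.csv")
ground_truth = pd.read_csv("data/ground_truth.csv")
merged = submission.merge(ground_truth, on="image_id", suffixes=("", "_gt"))

mean_iou = merged.apply(iou, axis=1).mean()
mean_weight_error = (merged["weight"] - merged["weight_gt"]).abs().mean()
combined_score = mean_iou - (mean_weight_error / 100.0)  # default formula
```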
**Leaderboard Persistence:** Evaluation results are stored in a CSV file (default filename: `evaluation_results.csv`; see the sketch after this list) with the following columns:

`submission_id, mean_iou, mean_weight_error, combined_score`

- `submission_id`: A unique timestamp-based identifier for each submission.
- `mean_iou`: Average IoU calculated across all predictions.
- `mean_weight_error`: Average absolute error (in grams) between predicted and true weights.
- `combined_score`: A custom score that reflects both detection quality and weight prediction accuracy.
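
A sketch of the persistence step, under the same assumptions as above (the metric values here are placeholders for the ones computed in the previous sketch):

```python
# Sketch: append one evaluation result to the leaderboard file.
import os
from datetime import datetime
import pandas as pd

mean_iou, mean_weight_error = 0.72, 35.0  # placeholders for the computed metrics
combined_score = mean_iou - mean_weight_error / 100.0

row = pd.DataFrame([{
    "submission_id": datetime.now().strftime("%Y%m%d_%H%M%S"),  # timestamp-based id
    "mean_iou": mean_iou,
    "mean_weight_error": mean_weight_error,
    "combined_score": combined_score,
}])

results_file = "evaluation_results.csv"
# Append to the leaderboard, writing the header only if the file is new.
row.to_csv(results_file, mode="a", header=not os.path.exists(results_file), index=False)
```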
## Restarting the Space

If you encounter any issues (for example, if the evaluation queues or result folders become problematic), please restart the Space. This action will clear directories such as `eval-queue`, `eval-queue-bk`, `eval-results`, and `eval-results-bk`.
## Code Logic for Advanced Configuration

For more complex edits or customizations, consider the following:

**Evaluation Logic:** The functions for reading the CSV submissions, computing IoU, and computing weight error are defined in `app.py`. Adjust these functions if you need to change how the metrics are computed.

**Leaderboard Persistence:** The leaderboard is maintained by appending evaluation results to the `evaluation_results.csv` file. You can modify this behavior in `app.py` if you wish to use a different persistence method.

**User Interface:** The Gradio interface (submission upload, leaderboard refresh, etc.) is also implemented in `app.py`. You can enhance or simplify the UI by modifying this file; a rough sketch of the wiring follows.
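
As a rough orientation, a minimal Gradio wiring for these pieces could look like the following. The function and component names here are hypothetical; the real interface in `app.py` may be organized differently.

```python
# Hypothetical sketch of the Gradio wiring; the real app.py may differ.
import gradio as gr
import pandas as pd

def load_leaderboard():
    """Read the persisted leaderboard for display."""
    return pd.read_csv("evaluation_results.csv")

def evaluate_submission(file):
    """Placeholder: score the uploaded CSV and append the result to
    evaluation_results.csv (see the sketches above), then refresh."""
    return load_leaderboard()

with gr.Blocks() as demo:
    gr.Markdown("# Food Weight Benchmark")
    upload = gr.File(label="Upload submission CSV", file_types=[".csv"])
    leaderboard = gr.Dataframe(label="Leaderboard")
    refresh = gr.Button("Refresh leaderboard")
    upload.upload(evaluate_submission, inputs=upload, outputs=leaderboard)
    refresh.click(load_leaderboard, inputs=None, outputs=leaderboard)

demo.launch()
```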
This README should help you configure and customize your leaderboard for object detection and food weight evaluation. Feel free to adjust any sections to better fit your project needs.