What is the most appropriate inference time metric if input image sizes are so different?

#4
by Ouz-G - opened

Input image sizes vary a lot, so FLOPs are not really useful for comparing the inference time of different models in an apples-to-apples manner. Which indicator correlates most closely with, say, single-batch inference time?

Is img_size the training input size or the inference input size? The whole table is a bit confusing.

@rwightman ?

PyTorch Image Models org

@Ouz-G Train values are not included; it's the inference image size. A higher input size yields higher accuracy, so it's just another model scaling parameter / design tradeoff. Step time is the latency for one batch, but these runs are done in batched mode with retry on failure, so 1 / throughput gives you a latency value that's been normalized by batch size.
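
To make the step time / throughput relationship concrete, here's a minimal timing sketch. The model name, batch size, and loop counts are illustrative choices of mine, not the configuration of the actual benchmark script behind the results tables; it just shows how a per-image latency falls out of batched throughput as 1 / throughput.

```python
import time
import torch
import timm

# Illustrative settings, not the official benchmark config
model_name = "resnet50"
batch_size = 32
steps = 20

model = timm.create_model(model_name, pretrained=False).eval()
img_size = model.default_cfg["input_size"]  # (C, H, W) at the model's inference resolution

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
x = torch.randn(batch_size, *img_size, device=device)

with torch.inference_mode():
    # warmup so the timed loop measures steady-state latency
    for _ in range(5):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()

    start = time.perf_counter()
    for _ in range(steps):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

step_time = elapsed / steps                 # latency for one batch (s)
throughput = batch_size * steps / elapsed   # images / second
per_image_latency = 1.0 / throughput        # batch-size-normalized latency (s / image)

print(f"step time:    {step_time * 1e3:.2f} ms/batch")
print(f"throughput:   {throughput:.1f} img/s")
print(f"1/throughput: {per_image_latency * 1e3:.3f} ms/img")
```

Because the runs are batched, 1 / throughput is a per-image latency averaged over the batch, which makes models comparable even when their inference image sizes differ.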
