Commit History
Increase logging level to verbose in ONNX-to-engine script
3deadf3
Trim sample video to keep only the important part
f5e3a24
luigi
committed on
Make 2 distinct ONNX2TRT conversion scripts, one for JetPack 4.6, the other for JetPack 5.1
06f3435
adjust visualisation style, allow draw_bbox() to show person id
a9e21bc
Add method to free buffers
b79a8af
Implement pycuda backend for inference with TensorRT engine
889281f
Add 'get_model_format_and_input_shape' helper function
2f2f685
Return and show bounding box confidence
634d4ff
Deal with multiple cameras
98d66d4
Show bounding box on screen too
0bf1eb7
Internalize input frame buffer
846e714
Unify batched and non-batched versions
42892de
Show FPS on demo video
15801f5
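A minimal sketch of how the FPS overlay value could be computed with an exponential moving average over frame intervals; the `FpsMeter` class and its `alpha` parameter are illustrative assumptions, not code from this repo:

```python
import time


class FpsMeter:
    """Smoothed frames-per-second estimate from successive tick() calls."""

    def __init__(self, alpha=0.1):
        self.alpha = alpha  # EMA smoothing factor
        self.fps = 0.0
        self.last = None

    def tick(self, now=None):
        # Call once per frame; returns the current smoothed FPS.
        now = time.perf_counter() if now is None else now
        if self.last is not None:
            dt = now - self.last
            inst = 1.0 / dt if dt > 0 else 0.0
            # Seed with the first instantaneous value, then smooth.
            self.fps = inst if self.fps == 0.0 else (
                (1.0 - self.alpha) * self.fps + self.alpha * inst
            )
        self.last = now
        return self.fps
```

The smoothed value can then be drawn on each frame (e.g. with `cv2.putText`) without the number jittering on every frame.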
Support Inference over batch with TensorRT Engine Model
0cdc9a7
Use dynamic batch size by default
09ccc6e
Add script used to convert ONNX to fp32/fp16/int8/mixed engine
92676db
Use ONNX Runtime instead of onnx.checker.check_model to detect ONNX model
57a8c6b
Resize keeping aspect ratio in visualization
35b2a45
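Aspect-ratio-preserving resize boils down to scaling by the smaller of the two axis ratios; a small sketch of that size computation (the `letterbox_size` helper is a hypothetical name, not the repo's function):

```python
def letterbox_size(src_w, src_h, dst_w, dst_h):
    """Largest (w, h) that fits inside (dst_w, dst_h) at the source aspect ratio."""
    # Scale by the limiting dimension so the whole image fits;
    # the remaining area of the target box would be padded.
    scale = min(dst_w / src_w, dst_h / src_h)
    return int(round(src_w * scale)), int(round(src_h * scale))
```

For example, a 1920x1080 frame shown in a 640x640 window resizes to 640x360 instead of being distorted to 640x640.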
fix postprocess_batch method in RTMO_GPU_Batch class
6008f96
Add model format (ONNX/Engine) & input size detection (based on file header, not on filename) for RTMO
58a44cf
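One way to distinguish an ONNX file from a serialized TensorRT engine by header bytes rather than filename: an ONNX model is a protobuf `ModelProto`, whose first field (`ir_version`, field number 1, varint wire type) serializes as a leading `0x08` byte. This is a hedged sketch of the idea, not the repo's implementation, and it assumes any non-ONNX file is an engine:

```python
def detect_model_format(path):
    """Guess 'onnx' or 'engine' from the file header, not the extension."""
    with open(path, "rb") as f:
        first = f.read(1)
    # ModelProto begins with the ir_version field tag, byte 0x08.
    return "onnx" if first == b"\x08" else "engine"
```

A fuller version could also parse the ONNX graph to read the model's expected input shape, as the commit message suggests.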
Add TensorRT engine support for RTMO
bbf20b6
Re-quantize models in FP16 but keep positional encoding in FP32 to avoid accuracy loss
f9a6075
Improve Debug Message Layout in Validate Function
e8f08e3
Add appropriate input-image preprocessing and postprocessing to validate function
6dfe441
Validate only on pose result, add visual check
b58a63b
Add comparison between joint coordinates from original and converted models
ab55aa6
Add 'rtol', 'atol' arguments to control tolerance in model validation
684ebde
Fix Input Type Error raised in infer()
18eb8e3
Add utility to convert ONNX model in FP32/16 mixed precision
9f60e86
Load ./libmmdeploy_tensorrt_ops.so for TensorRT EP if available
1c5dc58
Remove fix_batch_dimension_all.sh
01042ef
Remove models with fixed batch size as dynamic batch is generally supported
ae1686f
Apply constant folding
d474b05
Reperform shape inference on all models
e7b68a5
Bugfix: correct batch-dimension fixing, then regenerate ONNX models
e4e03fd