Update README.md
README.md
This model is compatible with Jetson Orin Nano hardware.
Note that all quantization introduced in the conversion is purely static, meaning the converted model may have noticeably lower accuracy than the original one.
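To make the accuracy caveat concrete, here is a minimal sketch (not the converter's actual code) of what static int8 quantization does: a scale is fixed ahead of time, and any value outside the range that scale covers gets clipped, which is where the accuracy loss comes from.

```python
import numpy as np

def quantize_static(x, scale, zero_point=0):
    """Quantize float values to int8 with a fixed (static) scale."""
    q = np.round(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

def dequantize(q, scale, zero_point=0):
    """Map int8 values back to floats; rounding/clipping error is permanent."""
    return (q.astype(np.float32) - zero_point) * scale

# With a static scale chosen before deployment, activations larger than
# scale * 127 are clipped to the int8 boundary.
scale = 0.1
x = np.array([0.05, 1.0, 20.0], dtype=np.float32)  # 20.0 exceeds the covered range
x_hat = dequantize(quantize_static(x, scale), scale)
# x_hat[2] is clipped to 127 * 0.1 = 12.7 instead of 20.0
```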
Todo: use the [coco-pose-2017](https://huggingface.co/datasets/Mai0313/coco-pose-2017) dataset to calibrate the int8 model.
For more information on calibration for post-training quantization, see [this slide deck](https://on-demand.gputechconf.com/gtc/2017/presentation/s7310-8-bit-inference-with-tensorrt.pdf).
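The simplest form of calibration is max calibration: run the model over a calibration set, record the largest absolute activation value, and derive the int8 scale from it. The linked slides describe the entropy (KL-divergence) calibration TensorRT actually uses, which clips outliers more aggressively; the sketch below shows only the max-calibration baseline, with random arrays standing in for activations collected over a dataset such as coco-pose-2017.

```python
import numpy as np

def max_calibrate(calib_batches):
    """Pick an int8 scale via max calibration over calibration batches.

    TensorRT's entropy calibration (see the linked slides) instead searches
    for the clipping threshold minimizing KL divergence between the float
    and quantized activation distributions; max calibration is the baseline.
    """
    amax = max(float(np.abs(batch).max()) for batch in calib_batches)
    return amax / 127.0  # map [-amax, amax] onto the int8 range

# Hypothetical stand-in for activations gathered while running the model
# over a calibration dataset.
batches = [np.random.randn(8, 16).astype(np.float32) for _ in range(4)]
scale = max_calibrate(batches)
```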
# Large