# Depth-Anything-V2

This version of Depth-Anything-V2 has been converted to run on the Axera NPU using w8a16 quantization.

Compatible with Pulsar2 version: 3.4

## Convert tools links

For those interested in model conversion, you can try exporting the axmodel yourself with the Pulsar2 toolchain.

## Support Platform

| Chip   | Inference time |
|--------|----------------|
| AX650  | 33 ms          |
| AX630C | 310 ms         |

## How to use

Download all files from this repository to the device:

```bash
root@ax650:~/AXERA-TECH/Depth-Anything-V2# tree
.
|-- README.md
|-- calib-cocotest2017.tar
|-- config.json
|-- depth_anything_v2_vits.onnx
|-- depth_anything_v2_vits_ax620e.axmodel
|-- depth_anything_v2_vits_ax650.axmodel
|-- examples
|   |-- demo01.jpg
....
|   `-- demo20.jpg
|-- output-ax.png
`-- python
    |-- infer.py
    |-- infer_onnx.py
    |-- output.png
    `-- requirements.txt

2 directories, 31 files
root@ax650:~/AXERA-TECH/Depth-Anything-V2#
```

### Python env requirement

#### pyaxengine

https://github.com/AXERA-TECH/pyaxengine

```bash
wget https://github.com/AXERA-TECH/pyaxengine/releases/download/0.1.3.rc1/axengine-0.1.3-py3-none-any.whl
pip install axengine-0.1.3-py3-none-any.whl
```

#### Others

None required.
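Before the model sees a frame, `infer.py` has to turn an input image into the tensor layout the axmodel expects. The helper below is a minimal, dependency-free sketch of typical Depth-Anything preprocessing (resize to a square input, ImageNet mean/std normalization, NCHW layout); the 518×518 input size and the function name are assumptions for illustration, not taken from this repository's script.

```python
import numpy as np

# ImageNet statistics commonly used for Depth-Anything preprocessing.
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(image_hwc_uint8: np.ndarray, size: int = 518) -> np.ndarray:
    """Convert an HxWx3 uint8 RGB image into a 1x3xSxS float32 tensor.

    Nearest-neighbour index resizing keeps this sketch NumPy-only; real
    code would use cv2.resize or PIL for better interpolation quality.
    """
    h, w, _ = image_hwc_uint8.shape
    ys = (np.arange(size) * h // size).clip(0, h - 1)
    xs = (np.arange(size) * w // size).clip(0, w - 1)
    resized = image_hwc_uint8[ys][:, xs].astype(np.float32) / 255.0
    normalized = (resized - IMAGENET_MEAN) / IMAGENET_STD
    return normalized.transpose(2, 0, 1)[None, ...]  # HWC -> 1xCxHxW
```

The resulting `(1, 3, 518, 518)` float32 array is the shape of tensor a session's `run()` call would consume.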

### Inference on an AX650 host, such as the M4N-Dock (爱芯派Pro)

Input image:

```bash
root@ax650:~/AXERA-TECH/Depth-Anything-V2# python3 python/infer.py --model depth_anything_v2_vits_ax650.axmodel --img examples/demo01.jpg
[INFO] Available providers:  ['AxEngineExecutionProvider']
[INFO] Using provider: AxEngineExecutionProvider
[INFO] Chip type: ChipType.MC50
[INFO] VNPU type: VNPUType.DISABLED
[INFO] Engine version: 2.12.0s
[INFO] Model type: 2 (triple core)
[INFO] Compiler version: 3.3 ae03a08f
root@ax650:~/AXERA-TECH/Depth-Anything-V2# ls
```

Output image:
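The output image is a visualization of the raw depth map the model predicts. As a minimal sketch, a float depth map can be min-max normalized into an 8-bit image for saving; the function below is illustrative only (the repository's `infer.py` may instead apply a color map such as cv2's COLORMAP_INFERNO).

```python
import numpy as np

def depth_to_uint8(depth: np.ndarray) -> np.ndarray:
    """Min-max normalize a float depth map into a uint8 image in [0, 255]."""
    d_min, d_max = float(depth.min()), float(depth.max())
    if d_max - d_min < 1e-8:  # flat map: avoid division by zero
        return np.zeros(depth.shape, dtype=np.uint8)
    scaled = (depth - d_min) / (d_max - d_min)
    return (scaled * 255.0).round().astype(np.uint8)

# Example: squeeze an axmodel output of shape (1, 1, H, W) to HxW, then
# save it, e.g. cv2.imwrite("output-ax.png", depth_to_uint8(depth_hw)).
```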
