SATRN
Convert tool links:
For those who are interested in model conversion, you can try to export the ONNX model or the axmodel yourself through the convert tools linked here.
Installation
```bash
conda create -n open-mmlab python=3.8 pytorch=1.10 cudatoolkit=11.3 torchvision -c pytorch -y
conda activate open-mmlab
pip3 install openmim
git clone https://github.com/open-mmlab/mmocr.git
cd mmocr
mim install -e .
```
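A quick way to confirm the installation worked (a generic import check, not specific to this repository):

```python
# Sanity check: mmocr should import cleanly after `mim install -e .`
import mmocr

print(mmocr.__version__)
```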
Supported Platforms
Speed measurements of the two parts of SATRN under different NPU configurations: (1) backbone + encoder, (2) decoder.

| | backbone+encoder (ms) | decoder (ms) |
|---|---|---|
| NPU1 | 20.494 | 2.648 |
| NPU2 | 9.785 | 1.504 |
| NPU3 | 6.085 | 1.384 |
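As a rough worked example, assuming the decoder figure is per autoregressive step (SATRN decodes one character at a time, though the table does not say how the decoder was timed): the 4-character result in the demo below would cost about 6.085 + 4 × 1.384 ≈ 11.6 ms on NPU3.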
How to use
Download all files from this repository to the device:
```
.
├── axmodel
│   ├── backbone_encoder.axmodel
│   └── decoder.axmodel
├── demo_text_recog.jpg
├── onnx
│   ├── satrn_backbone_encoder.onnx
│   └── satrn_decoder_sim.onnx
├── README.md
├── run_axmodel.py
├── run_model.py
└── run_onnx.py
```
Python environment requirements
1. pyaxengine
https://github.com/AXERA-TECH/pyaxengine

```bash
wget https://github.com/AXERA-TECH/pyaxengine/releases/download/0.1.1rc0/axengine-0.1.1-py3-none-any.whl
pip install axengine-0.1.1-py3-none-any.whl
```
2. satrn (see the mmocr Installation section above)
Inference with the ONNX models

```bash
python run_onnx.py
```

Input: demo_text_recog.jpg

Output:

```
pred_text: STAR
score: [0.9384028315544128, 0.9574984908103943, 0.9993689656257629, 0.9994958639144897]
```
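If you want to build your own pipeline instead of using run_onnx.py, a reasonable first step is to inspect the I/O of the two stages with onnxruntime. This is a minimal sketch; only the file names come from this repository, and the printed names and shapes depend on the exported models:

```python
# Inspect the inputs/outputs of the two SATRN stages before wiring them up.
import onnxruntime as ort

for path in ("onnx/satrn_backbone_encoder.onnx", "onnx/satrn_decoder_sim.onnx"):
    sess = ort.InferenceSession(path)
    print(path)
    for t in sess.get_inputs():
        print("  input :", t.name, t.shape, t.type)
    for t in sess.get_outputs():
        print("  output:", t.name, t.shape, t.type)
```

The two files mirror the latency table above: the backbone+encoder runs once per image, while the decoder produces the character sequence.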
Inference with AX650 Host

Check the pyaxengine reference above for more information.
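The bundled script is presumably the entry point on the device:

```bash
python run_axmodel.py
```

For direct calls, here is a minimal sketch that assumes pyaxengine exposes the onnxruntime-style InferenceSession API its README describes; the input name and shape below are placeholders, not taken from this model:

```python
# Hedged sketch: assumes an onnxruntime-style API in pyaxengine.
import numpy as np
import axengine as axe

sess = axe.InferenceSession("axmodel/backbone_encoder.axmodel")
dummy = np.zeros((1, 3, 32, 100), dtype=np.float32)  # placeholder shape
# Feed the first declared input; real code should follow run_axmodel.py.
outputs = sess.run(None, {sess.get_inputs()[0].name: dummy})
print([o.shape for o in outputs])
```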