satrn

original repo

Convert tools links:

For those interested in model conversion, you can export the ONNX or axmodel through

satrn.axera

Installation

conda create -n open-mmlab python=3.8 pytorch=1.10 cudatoolkit=11.3 torchvision -c pytorch -y
conda activate open-mmlab
pip3 install openmim
git clone https://github.com/open-mmlab/mmocr.git
cd mmocr
mim install -e .

Support Platform

The speed measurements (under different NPU configurations) of the two parts of SATRN:

(1) backbone+encoder

(2) decoder

        backbone+encoder (ms)   decoder (ms)
NPU1    20.494                  2.648
NPU2    9.785                   1.504
NPU3    6.085                   1.384
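As a rough sanity check on these numbers, the sketch below sums the two stages into a per-image latency estimate. Note this assumes a single decoder invocation; since SATRN's decoder is autoregressive, the decoder stage may run once per predicted character, so treat the sum as a lower bound.

```python
# Per-stage latencies (ms) taken from the table above.
timings = {
    'NPU1': (20.494, 2.648),
    'NPU2': (9.785, 1.504),
    'NPU3': (6.085, 1.384),
}

def total_latency(npu):
    """Lower-bound per-image latency: one backbone+encoder pass plus
    one decoder invocation (the real decoder may run per character)."""
    enc_ms, dec_ms = timings[npu]
    return enc_ms + dec_ms

for npu in timings:
    print(npu, round(total_latency(npu), 3))
```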

How to use

Download all files from this repository to the device

.
β”œβ”€β”€ axmodel
β”‚   β”œβ”€β”€ backbone_encoder.axmodel
β”‚   └── decoder.axmodel
β”œβ”€β”€ demo_text_recog.jpg
β”œβ”€β”€ onnx
β”‚   β”œβ”€β”€ satrn_backbone_encoder.onnx
β”‚   └── satrn_decoder_sim.onnx
β”œβ”€β”€ README.md
β”œβ”€β”€ run_axmodel.py
β”œβ”€β”€ run_model.py
└── run_onnx.py

Python environment requirements

1. pyaxengine

https://github.com/AXERA-TECH/pyaxengine

wget https://github.com/AXERA-TECH/pyaxengine/releases/download/0.1.1rc0/axengine-0.1.1-py3-none-any.whl
pip install axengine-0.1.1-py3-none-any.whl

2. satrn

satrn installation

Inference with the ONNX model

python run_onnx.py

input: demo_text_recog.jpg

output:

pred_text: STAR
score: [0.9384028315544128, 0.9574984908103943, 0.9993689656257629, 0.9994958639144897]
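To illustrate how `pred_text` and the per-character `score` list are derived, here is a minimal greedy-decoding sketch over dummy decoder logits. The character set, EOS handling, and logits are hypothetical stand-ins (the real dictionary and decoder outputs come from the mmocr config and `decoder.axmodel`); only the softmax/argmax post-processing pattern is what the demo scripts perform.

```python
import numpy as np

# Hypothetical character set; the real dictionary comes from the mmocr config.
CHARSET = ['<PAD>', 'A', 'R', 'S', 'T', '<EOS>']
EOS = CHARSET.index('<EOS>')

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def greedy_decode(logits):
    """Turn per-step decoder logits (steps x vocab) into text + confidences."""
    probs = softmax(logits)
    ids = probs.argmax(axis=-1)
    text, scores = '', []
    for step, idx in enumerate(ids):
        if idx == EOS:          # stop at the end-of-sequence token
            break
        text += CHARSET[idx]
        scores.append(float(probs[step, idx]))
    return text, scores

# Dummy logits standing in for the decoder output on the demo image.
dummy = np.full((5, len(CHARSET)), -4.0)
for step, ch in enumerate('STAR'):
    dummy[step, CHARSET.index(ch)] = 4.0
dummy[4, EOS] = 4.0

text, scores = greedy_decode(dummy)
print(text, scores)  # text is 'STAR'; each score is the max softmax prob per step
```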

Inference with AX650 Host

Check the reference for more information.
