Add/update the quantized ONNX model files and README.md for Transformers.js v3

#1 opened by whitphx (HF Staff)

Applied Quantizations

✅ Based on model.onnx with slimming

↳ ❌ int8: model_int8.onnx (added but the JS-based E2E test failed — likely because onnxruntime's CPU ConvInteger kernel is only implemented for uint8 inputs, so the int8-quantized Conv node has no available implementation):

```
/home/ubuntu/src/tjsmigration/node_modules/.pnpm/[email protected]/node_modules/onnxruntime-node/dist/backend.js:25
            __classPrivateFieldGet(this, _OnnxruntimeSessionHandler_inferenceSession, "f").loadModel(pathOrBuffer, options);
                                                                                           ^

Error: Could not find an implementation for ConvInteger(10) node with name '/resnet/embedder/embedder/convolution/Conv_quant'
    at new OnnxruntimeSessionHandler (/home/ubuntu/src/tjsmigration/node_modules/.pnpm/[email protected]/node_modules/onnxruntime-node/dist/backend.js:25:92)
    at Immediate.<anonymous> (/home/ubuntu/src/tjsmigration/node_modules/.pnpm/[email protected]/node_modules/onnxruntime-node/dist/backend.js:67:29)
    at process.processImmediate (node:internal/timers:485:21)

Node.js v22.16.0
```

↳ ✅ uint8: model_uint8.onnx (added)
↳ ✅ q4: model_q4.onnx (added)
↳ ✅ q4f16: model_q4f16.onnx (added)
↳ ✅ bnb4: model_bnb4.onnx (added)
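For reference, a minimal sketch of how the added variants map to files and how one would be selected in Transformers.js v3. The task and repo id in the commented usage are placeholders, not taken from this PR; the file-name pattern follows the artifacts listed above (`model_uint8.onnx`, `model_q4.onnx`, etc.).

```javascript
// Quantized variants added in this PR (int8 failed its E2E test and was excluded).
const ADDED_DTYPES = ["uint8", "q4", "q4f16", "bnb4"];

// Transformers.js v3 resolves the `dtype` option to a file of the form
// model_<dtype>.onnx inside the repo's onnx/ folder.
function quantizedFileName(dtype) {
  if (!ADDED_DTYPES.includes(dtype)) {
    throw new Error(`dtype "${dtype}" was not added in this PR`);
  }
  return `model_${dtype}.onnx`;
}

// Hypothetical usage (placeholder repo id; requires a model download at runtime):
// import { pipeline } from "@huggingface/transformers";
// const classifier = await pipeline("image-classification", "org/model", { dtype: "q4" });
```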

Xenova changed pull request status to merged
