Feature Extraction
sentence-transformers
ONNX
Transformers
fastText
sentence-embeddings
sentence-similarity
semantic-search
vector-search
retrieval-augmented-generation
multilingual
cross-lingual
low-resource
merged-model
combined-model
tokenizer-embedded
tokenizer-integrated
standalone
all-in-one
quantized
int8
int8-quantization
optimized
efficient
fast-inference
low-latency
lightweight
small-model
edge-ready
arm64
edge-device
mobile-device
on-device
mobile-inference
tablet
smartphone
embedded-ai
onnx-runtime
onnx-model
MiniLM
MiniLM-L12-v2
paraphrase
usecase-ready
plug-and-play
production-ready
deployment-ready
real-time
distiluse
Update README.md
README.md CHANGED

@@ -97,4 +97,15 @@ session = ort.InferenceSession(
 
 input_feed = {"text": np.asarray(['something..'])}
 outputs = session.run(None, input_feed)
-embedding = outputs[0]
+embedding = outputs[0]
+```
+
+---
+
+## JS Example
+```JavaScript
+const session = await InferenceSession.create(EMBEDDING_FULL_MODEL_PATH);
+const inputTensor = new Tensor('string', ['something..'], [1]);
+const feeds = { text: inputTensor };
+const outputMap = await session.run(feeds);
+const embedding = outputMap.text_embedding.data;
+```