# 🇸🇦 arabic-english-bge-m3
This model is a 36.2% smaller version of BAAI/bge-m3 for the Arabic language.
The ONNX quantized version is approximately 75% smaller (363 MB) than the pruned model, while retaining approximately 98% of the original model's quality.
This pruned model should perform similarly to the original model for Arabic-language tasks, with a much smaller memory footprint. However, it may not perform well for the other languages covered by the original multilingual model, because tokens rarely used in Arabic were removed from its vocabulary.
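If you want to verify the quality claim on your own data, one quick check is to embed the same Arabic sentences with both the pruned and the original model and compare the results: since pruning only removes vocabulary entries, the two models should share an embedding space, and the cosine similarity of corresponding vectors should stay close to 1. A minimal sketch (the sample sentences are illustrative):

```python
from sentence_transformers import SentenceTransformer

original = SentenceTransformer("BAAI/bge-m3")
pruned = SentenceTransformer("sayed0am/arabic-english-bge-m3")

# Illustrative Arabic sentences; use your own in-domain text.
sentences = ["مرحبا بالعالم", "الثعلب البني السريع يقفز فوق الكلب الكسول."]

emb_original = original.encode(sentences, normalize_embeddings=True)
emb_pruned = pruned.encode(sentences, normalize_embeddings=True)

# Per-sentence cosine similarity (vectors are already normalized)
print((emb_original * emb_pruned).sum(axis=1))
```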
## Usage
You can use this model with the Transformers library:
```python
from transformers import AutoModel, AutoTokenizer

model_name = "sayed0am/arabic-english-bge-m3"
model = AutoModel.from_pretrained(model_name, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, use_fast=True)
```
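With the plain Transformers API you get token-level hidden states and pool them yourself; BGE-M3's dense embeddings come from the [CLS] (first) token. A minimal sketch, continuing from the snippet above (the example sentences are illustrative):

```python
import torch

sentences = ["صباح الخير", "Good morning"]  # illustrative examples
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (batch, seq_len, hidden_dim)

# Dense embedding = L2-normalized [CLS] (first token) representation
embeddings = torch.nn.functional.normalize(hidden[:, 0], dim=-1)
print(embeddings @ embeddings.T)  # cosine similarity matrix
```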
Or with the sentence-transformers library:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sayed0am/arabic-english-bge-m3")
```
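For example, to embed an Arabic/English pair and compare them (the sentences here are illustrative):

```python
from sentence_transformers import util

embeddings = model.encode(
    ["الذكاء الاصطناعي", "artificial intelligence"],  # illustrative examples
    normalize_embeddings=True,
)
print(util.cos_sim(embeddings, embeddings))  # cosine similarity matrix
```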
## Using it with an OpenAI-compatible embedding endpoint via Infinity
```bash
#!/bin/bash
port=8000
model=sayed0am/arabic-english-bge-m3
volume=$PWD/data

docker run -it \
  -v $volume:/app/.cache \
  -p $port:$port \
  michaelf34/infinity:latest-cpu \
  v2 \
  --engine optimum \
  --model-id $model \
  --port $port \
  --url-prefix v1 \
  --api-key sk-123
```
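Once the container is running, any OpenAI-compatible client can request embeddings. A minimal sketch with the openai Python package, assuming the port, URL prefix, and API key from the command above:

```python
from openai import OpenAI

# base_url and api_key match the flags passed to docker run above
client = OpenAI(base_url="http://localhost:8000/v1", api_key="sk-123")

response = client.embeddings.create(
    model="sayed0am/arabic-english-bge-m3",
    input=["مرحبا بالعالم", "Hello, world"],
)
print(len(response.data[0].embedding))  # embedding dimension
```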
## Using ONNX
```python
# pip install huggingface-hub
from huggingface_hub import snapshot_download

snapshot_download(repo_id="sayed0am/arabic-english-bge-m3", local_dir="arabic-english-bge-m3")
```
```python
from optimum.onnxruntime import ORTModelForFeatureExtraction
from transformers import AutoTokenizer
import torch

# Make sure the model weights were downloaded locally to `arabic-english-bge-m3`
# (see the snapshot_download step above).
model = ORTModelForFeatureExtraction.from_pretrained(
    "arabic-english-bge-m3", subfolder="onnx", provider="CUDAExecutionProvider"
)  # omit `provider` for CPU usage
tokenizer = AutoTokenizer.from_pretrained("arabic-english-bge-m3")

sentences = [
    "English: The quick brown fox jumps over the lazy dog.",
    "Arabic: الثعلب البني السريع يقفز فوق الكلب الكسول.",
]

encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt").to("cuda")  # for CPU, remove .to("cuda")

# Get the token-level hidden states
out = model(**encoded_input, return_dict=True).last_hidden_state

# Dense embedding = L2-normalized [CLS] (first token) representation
dense_vecs = torch.nn.functional.normalize(out[:, 0], dim=-1)
```
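Because the embeddings are L2-normalized, cosine similarity reduces to a dot product; for the English/Arabic pair above:

```python
# Off-diagonal entries give the cross-lingual similarity of the two sentences
similarity = dense_vecs @ dense_vecs.T
print(similarity)
```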