---
language: en
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- mlx
datasets:
- s2orc
- flax-sentence-embeddings/stackexchange_xml
- ms_marco
- gooaq
- yahoo_answers_topics
- code_search_net
- search_qa
- eli5
- snli
- multi_nli
- wikihow
- natural_questions
- trivia_qa
- embedding-data/sentence-compression
- embedding-data/flickr30k-captions
- embedding-data/altlex
- embedding-data/simple-wiki
- embedding-data/QQP
- embedding-data/SPECTER
- embedding-data/PAQ_pairs
- embedding-data/WikiAnswers
pipeline_tag: sentence-similarity
---

# mlx-community/all-MiniLM-L6-v2-bf16
The model [`mlx-community/all-MiniLM-L6-v2-bf16`](https://huggingface.co/mlx-community/all-MiniLM-L6-v2-bf16) was converted to MLX format from [`sentence-transformers/all-MiniLM-L6-v2`](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) using mlx-lm version 0.0.3.
## Use with mlx
```bash
pip install mlx-embeddings
```
```python
from mlx_embeddings import load, generate
import mlx.core as mx

# Load the model and tokenizer
model, tokenizer = load("mlx-community/all-MiniLM-L6-v2-bf16")

# For text embeddings
output = generate(model, tokenizer, texts=["I like grapes", "I like fruits"])
embeddings = output.text_embeds  # Normalized embeddings

# Compute dot product between normalized embeddings (equivalent to cosine similarity)
similarity_matrix = mx.matmul(embeddings, embeddings.T)

print("Similarity matrix between texts:")
print(similarity_matrix)
```
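
The same normalized embeddings can also drive a small semantic-search loop. The sketch below ranks a few candidate sentences against a query; the query, the candidate documents, and the ranking logic are illustrative assumptions, while `load`, `generate`, and `output.text_embeds` are the same API used above.

```python
from mlx_embeddings import load, generate
import mlx.core as mx

model, tokenizer = load("mlx-community/all-MiniLM-L6-v2-bf16")

# Hypothetical query and candidate documents for illustration
query = "How do I peel a mango?"
documents = [
    "Instructions for peeling and slicing a mango.",
    "The history of tropical fruit trade.",
    "A recipe for grape jelly.",
]

# Embed the query together with the candidate documents
output = generate(model, tokenizer, texts=[query] + documents)
embeddings = output.text_embeds  # Normalized, so dot product == cosine similarity

# Score each document against the query (row 0) and rank best-first
scores = mx.matmul(embeddings[0:1], embeddings[1:].T)[0]
ranking = mx.argsort(scores).tolist()[::-1]

for idx in ranking:
    print(f"{scores[idx].item():.3f}  {documents[idx]}")
```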