# Dragon Query Encoder
This is the query encoder of the Dragon dual-encoder retrieval model, trained for dense passage retrieval tasks.
It should be used together with the corresponding Dragon Context Encoder for end-to-end retrieval.
## Model Architecture
- **Base model:** bert-base-uncased
- **Architecture:** Dense Passage Retriever (DPR) dual-encoder
- **Encoder type:** Query encoder (for queries)
- **Pooling method:** CLS pooling (the `[CLS]` token representation is used as the embedding)
- **Checkpoint origin:** The weights were converted from the official facebookresearch/dpr-scale Dragon implementation, specifically from the checkpoint provided at https://dl.fbaipublicfiles.com/dragon/checkpoints/DRAGON/checkpoint_best.ckpt
## Usage Example
```python
from transformers import AutoTokenizer, AutoModel
import torch

# Load query encoder
q_tokenizer = AutoTokenizer.from_pretrained("liyongkang/dragon-query-encoder")
q_model = AutoModel.from_pretrained("liyongkang/dragon-query-encoder")

# Load context encoder
p_tokenizer = AutoTokenizer.from_pretrained("liyongkang/dragon-context-encoder")
p_model = AutoModel.from_pretrained("liyongkang/dragon-context-encoder")

query = "What is Dragon in NLP?"
passage = "A dual-encoder retrieval model for dense passage retrieval."

# Tokenize (the two tokenizers are identical, so either can be used)
q_inputs = q_tokenizer(query, return_tensors="pt", truncation=True, padding=True)
p_inputs = p_tokenizer(passage, return_tensors="pt", truncation=True, padding=True)

with torch.no_grad():
    q_vec = q_model(**q_inputs).last_hidden_state[:, 0]  # CLS pooling
    p_vec = p_model(**p_inputs).last_hidden_state[:, 0]  # CLS pooling

# Relevance is scored as the dot product between query and passage embeddings
score = (q_vec * p_vec).sum(dim=-1)
print("Dot product similarity:", score.item())
```