init commit
- .gitattributes +1 -0
- README.md +113 -0
- added_tokens.json +16 -0
- chat_template.json +3 -0
- config.json +45 -0
- generation_config.json +14 -0
- merges.txt +0 -0
- model.safetensors +3 -0
- modeling.py +221 -0
- preprocessor_config.json +29 -0
- special_tokens_map.json +31 -0
- tokenizer.json +3 -0
- tokenizer_config.json +145 -0
- vocab.json +0 -0
.gitattributes
CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,113 @@
---
pipeline_tag: text-classification
tags:
- vidore
- reranker
- qwen2_vl
language:
- multilingual
base_model:
- Qwen/Qwen2-VL-2B-Instruct
inference: false
license: cc-by-nc-4.0
library_name: transformers
---

<br><br>

<p align="center">
<img src="https://huggingface.co/datasets/jinaai/documentation-images/resolve/main/logo.webp" alt="Jina AI: Your Search Foundation, Supercharged!" width="150px">
</p>

<p align="center">
<b>Trained by <a href="https://jina.ai/"><b>Jina AI</b></a>.</b>
</p>

# jina-reranker-v3

## Intended Usage & Model Info

**Jina Reranker v3** (`jina-reranker-v3`) is a multilingual, multimodal model fine-tuned for text and visual document reranking, a crucial component in many information retrieval systems. It takes a query and a document as input and outputs a score indicating the relevance of the document to the query. The model is trained on a large dataset of query-document pairs and can rerank documents in multiple languages with high accuracy.

# Usage

_This model repository is licensed for research and evaluation purposes under CC-BY-NC-4.0. For commercial usage, please refer to Jina AI's APIs, AWS SageMaker, or Azure Marketplace offerings. Please [contact us](https://jina.ai/contact-sales) for any further clarifications._

1. The easiest way to use `jina-reranker-v3` is to call Jina AI's [Reranker API](https://jina.ai/reranker/).

```bash
curl https://api.jina.ai/v1/rerank \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "jina-reranker-v3",
    "query": "Organic skincare products for sensitive skin",
    "documents": [
      {"text": "Organic skincare for sensitive skin with aloe vera and chamomile."},
      {"text": "New makeup trends focus on bold colors and innovative techniques"},
      {"text": "Bio-Hautpflege für empfindliche Haut mit Aloe Vera und Kamille"},
      {"text": "Neue Make-up-Trends setzen auf kräftige Farben und innovative Techniken"},
      {"text": "Cuidado de la piel orgánico para piel sensible con aloe vera y manzanilla"},
      {"text": "Las nuevas tendencias de maquillaje se centran en colores vivos y técnicas innovadoras"},
      {"text": "针对敏感肌专门设计的天然有机护肤产品"},
      {"text": "新的化妆趋势注重鲜艳的颜色和创新的技巧"},
      {"text": "敏感肌のために特別に設計された天然有機スキンケア製品"},
      {"text": "新しいメイクのトレンドは鮮やかな色と革新的な技術に焦点を当てています"}
    ],
    "top_n": 3
  }'
```

2. You can also use the `transformers` library to interact with the model programmatically.

   Before you start, install the `transformers` library:

```bash
pip install "transformers>=4.47.3"
```

   And then:

```python
from transformers import AutoModel

model = AutoModel.from_pretrained(
    'jinaai/jina-reranker-v3',
    torch_dtype="auto",
    trust_remote_code=True,
)

model.to('cuda')  # or 'cpu' if no GPU is available
model.eval()

# Example query and documents
query = "Organic skincare products for sensitive skin"
documents = [
    "Organic skincare for sensitive skin with aloe vera and chamomile.",
    "New makeup trends focus on bold colors and innovative techniques",
    "Bio-Hautpflege für empfindliche Haut mit Aloe Vera und Kamille",
    "Neue Make-up-Trends setzen auf kräftige Farben und innovative Techniken",
    "Cuidado de la piel orgánico para piel sensible con aloe vera y manzanilla",
    "Las nuevas tendencias de maquillaje se centran en colores vivos y técnicas innovadoras",
    "针对敏感肌专门设计的天然有机护肤产品",
    "新的化妆趋势注重鲜艳的颜色和创新的技巧",
    "敏感肌のために特別に設計された天然有機スキンケア製品",
    "新しいメイクのトレンドは鮮やかな色と革新的な技術に焦点を当てています",
]

# construct sentence pairs
sentence_pairs = [[query, doc] for doc in documents]

scores = model.compute_score(sentence_pairs, max_length=1024)
```

The scores will be a list of floats, where each float represents the relevance score of the corresponding document to the query. Higher scores indicate higher relevance. For instance, the returned scores in this case will be:

```bash
[0.8311430811882019, 0.09401018172502518,
 0.6334102749824524, 0.08269733935594559,
 0.7620701193809509, 0.09947021305561066,
 0.9263036847114563, 0.05834583938121796,
 0.8418256044387817, 0.11124119907617569]
```
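Since `jina-reranker-v3` is multi-modal, `compute_score` can also rerank visual documents such as page screenshots: pass image file paths or URLs as documents and set `doc_type='image'` (see `modeling.py` below). A minimal sketch, using hypothetical image paths:

```python
# Visual document reranking (sketch). The image paths below are placeholders.
query = "Organic skincare products for sensitive skin"
image_documents = [
    "screenshots/skincare_catalog_page.png",  # hypothetical path
    "screenshots/makeup_trends_page.png",     # hypothetical path
]

image_pairs = [[query, doc] for doc in image_documents]

# doc_type='image' makes compute_score load the documents as images
scores = model.compute_score(image_pairs, max_length=2048, doc_type='image')
```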
added_tokens.json
ADDED
@@ -0,0 +1,16 @@
{
  "<|box_end|>": 151649,
  "<|box_start|>": 151648,
  "<|endoftext|>": 151643,
  "<|im_end|>": 151645,
  "<|im_start|>": 151644,
  "<|image_pad|>": 151655,
  "<|object_ref_end|>": 151647,
  "<|object_ref_start|>": 151646,
  "<|quad_end|>": 151651,
  "<|quad_start|>": 151650,
  "<|video_pad|>": 151656,
  "<|vision_end|>": 151653,
  "<|vision_pad|>": 151654,
  "<|vision_start|>": 151652
}
chat_template.json
ADDED
@@ -0,0 +1,3 @@
{
  "chat_template": "{% set image_count = namespace(value=0) %}{% set video_count = namespace(value=0) %}{% for message in messages %}{% if loop.first and message['role'] != 'system' %}<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n{% endif %}<|im_start|>{{ message['role'] }}\n{% if message['content'] is string %}{{ message['content'] }}<|im_end|>\n{% else %}{% for content in message['content'] %}{% if content['type'] == 'image' or 'image' in content or 'image_url' in content %}{% set image_count.value = image_count.value + 1 %}{% if add_vision_id %}Picture {{ image_count.value }}: {% endif %}<|vision_start|><|image_pad|><|vision_end|>{% elif content['type'] == 'video' or 'video' in content %}{% set video_count.value = video_count.value + 1 %}{% if add_vision_id %}Video {{ video_count.value }}: {% endif %}<|vision_start|><|video_pad|><|vision_end|>{% elif 'text' in content %}{{ content['text'] }}{% endif %}{% endfor %}<|im_end|>\n{% endif %}{% endfor %}{% if add_generation_prompt %}<|im_start|>assistant\n{% endif %}"
}
config.json
ADDED
@@ -0,0 +1,45 @@
{
  "_name_or_path": "jinaai/jina-reranker-v3",
  "architectures": ["JinaVLForRanking"],
  "auto_map": {
    "AutoModel": "modeling.JinaVLForRanking"
  },
  "attention_dropout": 0.0,
  "bos_token_id": 151643,
  "eos_token_id": 151645,
  "hidden_act": "silu",
  "hidden_size": 1536,
  "image_token_id": 151655,
  "initializer_range": 0.02,
  "intermediate_size": 8960,
  "max_position_embeddings": 32768,
  "max_window_layers": 28,
  "model_type": "qwen2_vl",
  "num_attention_heads": 12,
  "num_hidden_layers": 28,
  "num_key_value_heads": 2,
  "rms_norm_eps": 1e-6,
  "rope_scaling": {
    "mrope_section": [16, 24, 24],
    "rope_type": "default",
    "type": "default"
  },
  "rope_theta": 1000000.0,
  "sliding_window": 32768,
  "tie_word_embeddings": true,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.47.1",
  "use_cache": false,
  "use_sliding_window": false,
  "video_token_id": 151656,
  "vision_config": {
    "hidden_size": 1536,
    "in_chans": 3,
    "model_type": "qwen2_vl",
    "spatial_patch_size": 14
  },
  "vision_end_token_id": 151653,
  "vision_start_token_id": 151652,
  "vision_token_id": 151654,
  "vocab_size": 151936
}
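Note the `auto_map` entry: it is what makes `AutoModel.from_pretrained(..., trust_remote_code=True)` resolve to the custom `JinaVLForRanking` class in `modeling.py` rather than the stock Qwen2-VL model class. A quick sanity check (sketch):

```python
from transformers import AutoModel

# trust_remote_code=True is required so that auto_map -> modeling.JinaVLForRanking takes effect
model = AutoModel.from_pretrained(
    "jinaai/jina-reranker-v3",
    torch_dtype="auto",
    trust_remote_code=True,
)
print(type(model).__name__)  # expected: JinaVLForRanking
```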
generation_config.json
ADDED
@@ -0,0 +1,14 @@
{
  "attn_implementation": "flash_attention_2",
  "bos_token_id": 151643,
  "do_sample": true,
  "eos_token_id": [
    151645,
    151643
  ],
  "pad_token_id": 151643,
  "temperature": 0.01,
  "top_k": 1,
  "top_p": 0.001,
  "transformers_version": "4.47.1"
}
merges.txt
ADDED
The diff for this file is too large to render.
See raw diff
model.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1d0a7b5fd0966512850481633159f357450dc738665870c9ac4f2b2da252f5e2
size 4889523546
modeling.py
ADDED
@@ -0,0 +1,221 @@
import torch
from torch import nn
from typing import Optional, Tuple, List, Union, Any
from transformers import Qwen2VLForConditionalGeneration
import logging
import warnings
from PIL import Image
from transformers.image_utils import load_image

logger = logging.getLogger(__name__)


def load_images(images, lazy_load: bool = True):
    # Disable the PIL DecompressionBomb threshold for reading large images.
    pil_max_px = Image.MAX_IMAGE_PIXELS
    Image.MAX_IMAGE_PIXELS = None

    images_batch = []
    for image in images:
        if isinstance(image, Image.Image):
            images_batch.append(image)
        else:
            pil_image = load_image(image)
            if lazy_load:
                images_batch.append(pil_image)
            else:
                # avoid "Too many open files" errors
                images_batch.append(pil_image.copy())
                pil_image.close()
    Image.MAX_IMAGE_PIXELS = pil_max_px

    return images_batch


def formatting_prompts_func(
    query: str,
    doc: str,
    query_type: str = 'text',
    doc_type: str = 'text',
    prefix_str: str = '',
) -> str:
    """
    Format prompts for different combinations of query and document types.

    Args:
        query: Query text or image path
        doc: Document text or image path
        query_type: Type of the query, either 'text' or 'image'
        doc_type: Type of the document, either 'text' or 'image'
        prefix_str: Optional prefix string to add
    """
    # Format query part
    if query_type == 'image':
        query_part = "**Query**:\n<|vision_start|><|image_pad|><|vision_end|>"
    else:
        query_part = f"**Query**:\n{query}"

    # Format document part
    if doc_type == 'image':
        doc_part = "**Document**:\n<|vision_start|><|image_pad|><|vision_end|>"
    else:
        doc_part = f"**Document**:\n{doc}"

    # Combine parts: the document comes first, then the query
    prompt = doc_part + '\n' + query_part

    # Add prefix if provided
    if prefix_str:
        prompt = prefix_str + '\n' + prompt

    return prompt


class JinaVLForRanking(Qwen2VLForConditionalGeneration):
    def __init__(self, config):
        super().__init__(config)

        self.padding_side = "left"
        self.num_labels = 1  # config.num_labels

        # hack the lm_head to do nothing, since we only want the hidden states
        self.lm_head = nn.Identity()

        # copy the idea from `Qwen2ForRewardModel`: use an MLP layer to get the final score
        self.score = nn.Sequential(
            nn.Linear(config.hidden_size, config.hidden_size),
            nn.ReLU(),
            nn.Linear(config.hidden_size, self.num_labels),
        )

        # Initialize weights and apply final processing
        self.post_init()

        self.score_token_id = 100

    def forward(self, *args, **kwargs) -> torch.Tensor:
        # Remove output_hidden_states and use_cache from kwargs; they are set explicitly below
        kwargs.pop("output_hidden_states", None)
        kwargs.pop("use_cache", None)
        assert kwargs.pop("labels", None) is None, "labels should not be passed to forward()"

        outputs = super().forward(
            *args,
            use_cache=False,
            output_hidden_states=True,
            **kwargs,
        )

        # get the hidden states of the last layer
        hidden_states = outputs.hidden_states[-1]

        # IMPORTANT: the padding tokens must be on the left side
        # get the hidden state of the last token and apply the scoring head
        pooled_logits = self.score(hidden_states[:, -1])

        return pooled_logits.squeeze(-1)

    @torch.no_grad()
    def compute_score(
        self,
        pairs: Union[List[Tuple[str, str]], Tuple[str, str]],
        batch_size: int = 8,
        max_length: int = 8192,
        max_query_length: int = 512,
        max_doc_length: Optional[int] = None,
        query_type: str = 'text',
        doc_type: str = 'text',
        show_progress: bool = False,
    ) -> List[float]:

        if not hasattr(self, "_processor"):
            from transformers import AutoProcessor

            self._processor = AutoProcessor.from_pretrained(self.name_or_path, trust_remote_code=True)

        assert isinstance(pairs, list)

        if isinstance(pairs[0], str):
            pairs = [pairs]

        max_length = max_length or self.config.max_length

        if max_doc_length is None:
            max_doc_length = max(max_length - max_query_length, max_query_length)

        if max_doc_length < max_query_length:
            warnings.warn(
                f"max_doc_length={max_doc_length} should be greater than max_query_length={max_query_length}"
            )

        assert (
            max_doc_length + max_query_length <= max_length
        ), f"max_doc_length ({max_doc_length}) + max_query_length ({max_query_length}) should be less than max_length ({max_length})"

        # reserve one position for the score token appended below
        max_length = max_length - 1

        all_scores = []

        device = next(self.parameters()).device

        batch_iter = range(0, len(pairs), batch_size)
        if show_progress:
            from tqdm import trange

            batch_iter = trange(0, len(pairs), batch_size, desc="Computing scores")

        for start_index in batch_iter:
            mini_batch = pairs[start_index : start_index + batch_size]

            batch_inputs = []
            for q, d in mini_batch:
                # TEMP FIX: truncate long text documents to max_doc_length tokens
                if doc_type == 'text':
                    tokens = self._processor.tokenizer(d, truncation=True, max_length=max_doc_length)
                    if len(tokens['input_ids']) >= max_doc_length:
                        d = self._processor.tokenizer.decode(tokens['input_ids'])

                batch_inputs.append(
                    formatting_prompts_func(
                        q, d, query_type=query_type, doc_type=doc_type
                    )
                )

            batch_images = None
            if doc_type == 'image':
                batch_images = load_images([d for (q, d) in mini_batch])
            elif query_type == 'image':
                batch_images = load_images([q for (q, d) in mini_batch])

            batch = self._processor(
                text=batch_inputs,
                images=batch_images,
                return_tensors="pt",
                padding=True,
                truncation=True,
                max_length=max_length,
            )

            # append the score token to the input_ids and attention_mask
            batch_size = batch["input_ids"].size(0)
            batch["input_ids"] = torch.cat(
                [
                    batch["input_ids"],
                    torch.full((batch_size, 1), self.score_token_id, device=batch["input_ids"].device),
                ],
                dim=1,
            )
            batch["attention_mask"] = torch.cat(
                [
                    batch["attention_mask"],
                    torch.ones((batch_size, 1), device=batch["attention_mask"].device),
                ],
                dim=1,
            )
            # move the batch to the correct device
            batch = {k: v.to(device) if isinstance(v, torch.Tensor) else v for k, v in batch.items()}

            scores = self.forward(**batch).view(-1).cpu().float().numpy().tolist()
            all_scores.extend(scores)

        if len(all_scores) == 1:
            return all_scores[0]
        return all_scores
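To make the scoring path explicit, the sketch below reproduces what `compute_score` does for a single text pair: format the prompt as in `formatting_prompts_func`, tokenize (the tokenizer is configured for left padding), append the score token, and read one logit from the last position in `forward`. The query and document strings are illustrative.

```python
# Sketch: one hand-rolled scoring step, mirroring compute_score() above.
import torch
from transformers import AutoModel, AutoProcessor

model = AutoModel.from_pretrained(
    "jinaai/jina-reranker-v3", torch_dtype="auto", trust_remote_code=True
).eval()
processor = AutoProcessor.from_pretrained("jinaai/jina-reranker-v3", trust_remote_code=True)

prompt = (
    "**Document**:\nOrganic skincare for sensitive skin with aloe vera and chamomile.\n"
    "**Query**:\nOrganic skincare products for sensitive skin"
)
batch = processor(text=[prompt], return_tensors="pt", padding=True)

# Append the score token (id 100) and extend the attention mask, as compute_score() does.
batch["input_ids"] = torch.cat(
    [batch["input_ids"], torch.full((1, 1), model.score_token_id)], dim=1
)
batch["attention_mask"] = torch.cat(
    [batch["attention_mask"], torch.ones((1, 1), dtype=batch["attention_mask"].dtype)], dim=1
)

with torch.no_grad():
    score = model(**batch)  # forward() pools the last hidden state into a single logit per row
print(float(score))
```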
preprocessor_config.json
ADDED
@@ -0,0 +1,29 @@
{
  "do_convert_rgb": true,
  "do_normalize": true,
  "do_rescale": true,
  "do_resize": true,
  "image_mean": [
    0.48145466,
    0.4578275,
    0.40821073
  ],
  "image_processor_type": "Qwen2VLImageProcessor",
  "image_std": [
    0.26862954,
    0.26130258,
    0.27577711
  ],
  "max_pixels": 12845056,
  "merge_size": 2,
  "min_pixels": 3136,
  "patch_size": 14,
  "processor_class": "Qwen2VLProcessor",
  "resample": 3,
  "rescale_factor": 0.00392156862745098,
  "size": {
    "max_pixels": 12845056,
    "min_pixels": 3136
  },
  "temporal_patch_size": 2
}
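For visual documents, the token budget per image is governed by `min_pixels`/`max_pixels` together with `patch_size` and `merge_size`: images are cut into 14×14 patches and 2×2 patches are merged into one visual token. The helper below is a rough back-of-the-envelope estimate of that budget (an approximation, not a reimplementation of the processor's exact resizing logic):

```python
def approx_image_tokens(
    height: int,
    width: int,
    patch_size: int = 14,
    merge_size: int = 2,
    min_pixels: int = 3136,
    max_pixels: int = 12845056,
) -> int:
    """Rough estimate of visual tokens for one image under the preprocessor settings above."""
    pixels = max(min(height * width, max_pixels), min_pixels)
    # one visual token covers roughly a (patch_size * merge_size)^2 pixel area after resizing
    return max(1, round(pixels / (patch_size * merge_size) ** 2))

print(approx_image_tokens(1080, 1920))  # a full-HD page screenshot -> roughly 2600 tokens
```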
special_tokens_map.json
ADDED
@@ -0,0 +1,31 @@
{
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>",
    "<|object_ref_start|>",
    "<|object_ref_end|>",
    "<|box_start|>",
    "<|box_end|>",
    "<|quad_start|>",
    "<|quad_end|>",
    "<|vision_start|>",
    "<|vision_end|>",
    "<|vision_pad|>",
    "<|image_pad|>",
    "<|video_pad|>"
  ],
  "eos_token": {
    "content": "<|im_end|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:091aa7594dc2fcfbfa06b9e3c22a5f0562ac14f30375c13af7309407a0e67b8a
size 11420371
tokenizer_config.json
ADDED
@@ -0,0 +1,145 @@
{
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "151643": {
      "content": "<|endoftext|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151644": {
      "content": "<|im_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151645": {
      "content": "<|im_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151646": {
      "content": "<|object_ref_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151647": {
      "content": "<|object_ref_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151648": {
      "content": "<|box_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151649": {
      "content": "<|box_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151650": {
      "content": "<|quad_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151651": {
      "content": "<|quad_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151652": {
      "content": "<|vision_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151653": {
      "content": "<|vision_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151654": {
      "content": "<|vision_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151655": {
      "content": "<|image_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151656": {
      "content": "<|video_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>",
    "<|object_ref_start|>",
    "<|object_ref_end|>",
    "<|box_start|>",
    "<|box_end|>",
    "<|quad_start|>",
    "<|quad_end|>",
    "<|vision_start|>",
    "<|vision_end|>",
    "<|vision_pad|>",
    "<|image_pad|>",
    "<|video_pad|>"
  ],
  "bos_token": null,
  "chat_template": "{% set image_count = namespace(value=0) %}{% set video_count = namespace(value=0) %}{% for message in messages %}{% if loop.first and message['role'] != 'system' %}<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n{% endif %}<|im_start|>{{ message['role'] }}\n{% if message['content'] is string %}{{ message['content'] }}<|im_end|>\n{% else %}{% for content in message['content'] %}{% if content['type'] == 'image' or 'image' in content or 'image_url' in content %}{% set image_count.value = image_count.value + 1 %}{% if add_vision_id %}Picture {{ image_count.value }}: {% endif %}<|vision_start|><|image_pad|><|vision_end|>{% elif content['type'] == 'video' or 'video' in content %}{% set video_count.value = video_count.value + 1 %}{% if add_vision_id %}Video {{ video_count.value }}: {% endif %}<|vision_start|><|video_pad|><|vision_end|>{% elif 'text' in content %}{{ content['text'] }}{% endif %}{% endfor %}<|im_end|>\n{% endif %}{% endfor %}{% if add_generation_prompt %}<|im_start|>assistant\n{% endif %}",
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|im_end|>",
  "errors": "replace",
  "extra_special_tokens": {},
  "model_max_length": 32768,
  "pad_token": "<|endoftext|>",
  "padding_side": "left",
  "processor_class": "Qwen2VLProcessor",
  "split_special_tokens": false,
  "tokenizer_class": "Qwen2Tokenizer",
  "unk_token": null
}
vocab.json
ADDED
The diff for this file is too large to render.
See raw diff