oshizo committed · Commit cadff82 (verified) · 1 Parent(s): 48bf0ac

Update README.md

Files changed (1):
  1. README.md (+40 -100)

README.md CHANGED
@@ -6,37 +6,32 @@ tags:
  pipeline_tag: sentence-similarity
  library_name: sentence-transformers
  ---
-
  # SentenceTransformer

- This is a [sentence-transformers](https://www.SBERT.net) model trained. It maps sentences & paragraphs to a None-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

  ## Model Details

  ### Model Description
  - **Model Type:** Sentence Transformer
  <!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- - **Maximum Sequence Length:** None tokens
- - **Output Dimensionality:** None dimensions
  - **Similarity Function:** Cosine Similarity
  <!-- - **Training Dataset:** Unknown -->
  <!-- - **Language:** Unknown -->
  <!-- - **License:** Unknown -->

- ### Model Sources
-
- - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
-
- ### Full Model Architecture
-
- ```
- SentenceTransformer(
-   (0): CLIPQwen2VLWrapper()
- )
- ```
-
  ## Usage

  ### Direct Usage (Sentence Transformers)
@@ -44,96 +39,41 @@ SentenceTransformer(
  First install the Sentence Transformers library:

  ```bash
- pip install -U sentence-transformers
  ```

  Then you can load this model and run inference.
  ```python
  from sentence_transformers import SentenceTransformer

- # Download from the 🤗 Hub
- model = SentenceTransformer("oshizo/japanese-clip-qwen2_vl-exp-0126")
- # Run inference
  sentences = [
-     'The weather is lovely today.',
-     "It's so sunny outside!",
-     'He drove to the stadium.',
  ]
- embeddings = model.encode(sentences)
- print(embeddings.shape)
- # [3, 1024]
-
- # Get the similarity scores for the embeddings
- similarities = model.similarity(embeddings, embeddings)
- print(similarities.shape)
- # [3, 3]
- ```
-
- <!--
- ### Direct Usage (Transformers)
-
- <details><summary>Click to see the direct usage in Transformers</summary>
-
- </details>
- -->
-
- <!--
- ### Downstream Usage (Sentence Transformers)
-
- You can finetune this model on your own dataset.
-
- <details><summary>Click to expand</summary>
-
- </details>
- -->
-
- <!--
- ### Out-of-Scope Use
-
- *List how the model may foreseeably be misused and address what users ought not to do with the model.*
- -->
-
- <!--
- ## Bias, Risks and Limitations
-
- *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
- -->
-
- <!--
- ### Recommendations
-
- *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
- -->
-
- ## Training Details
-
- ### Framework Versions
- - Python: 3.11.6
- - Sentence Transformers: 3.3.1
- - Transformers: 4.47.1
- - PyTorch: 2.5.1+cu121
- - Accelerate: 1.1.1
- - Datasets: 2.19.0
- - Tokenizers: 0.21.0
-
- ## Citation
-
- ### BibTeX
-
- <!--
- ## Glossary
-
- *Clearly define terms in order to be accessible across audiences.*
- -->
-
- <!--
- ## Model Card Authors
-
- *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
- -->

- <!--
- ## Model Card Contact
-
- *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
- -->
  pipeline_tag: sentence-similarity
  library_name: sentence-transformers
  ---
  # SentenceTransformer

+ This is an experimental model.
+ See the [blog post](https://note.com/oshizo/n/n473a0124585b) for details and the [repository](https://github.com/oshizo/japanese-clip-qwen2_vl/) for the related source code.
+
+ Changes from the previous version, [oshizo/japanese-clip-qwen2_vl-exp-0101](https://huggingface.co/oshizo/japanese-clip-qwen2_vl-exp-0101):
+ * Switched the text embedding model to [pkshatech/GLuCoSE-base-ja-v2](https://huggingface.co/pkshatech/GLuCoSE-base-ja-v2)
+ * Released the training data
+   * [oshizo/japanese-text-image-retrieval-train](https://huggingface.co/datasets/oshizo/japanese-text-image-retrieval-train)
+ * Generated questions with Qwen2.5-14B from OCR text and trained on pairs of questions and page images
+ * Trained at three document-image resolutions (longer side): 588px, 700px, and 896px (previously only 588px)

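The training resolutions above refer to the image's longer side. As a supplementary sketch (the `long_side_size` helper is illustrative, not part of the model card), this is one way to compute an aspect-ratio-preserving resize target before passing page images to the model:

```python
def long_side_size(width: int, height: int, target: int = 896) -> tuple[int, int]:
    # Scale so the longer side equals `target`, keeping the aspect ratio.
    scale = target / max(width, height)
    return (max(1, round(width * scale)), max(1, round(height * scale)))

# e.g. a 1200x800 page image; pass the result to PIL's Image.resize
print(long_side_size(1200, 800))       # (896, 597)
print(long_side_size(600, 1500, 588))  # (235, 588)
```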
  ## Model Details

  ### Model Description
  - **Model Type:** Sentence Transformer
  <!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
+ - **Maximum Sequence Length:** 512 tokens
+ - **Output Dimensionality:** 768 dimensions
  - **Similarity Function:** Cosine Similarity
  <!-- - **Training Dataset:** Unknown -->
  <!-- - **Language:** Unknown -->
  <!-- - **License:** Unknown -->

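The card lists cosine similarity as the similarity function. Assuming `model.similarity` computes plain cosine similarity between embedding rows, it can be reproduced in a few lines of NumPy (a sketch for intuition, not the library's actual implementation):

```python
import numpy as np

def cosine_similarity_matrix(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    # L2-normalize each row, then a matrix product gives pairwise cosine scores.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

a = np.array([[1.0, 0.0], [1.0, 1.0]])
b = np.array([[0.0, 2.0]])
print(cosine_similarity_matrix(a, b))
# [[0.        ]
#  [0.70710678]]
```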
  ## Usage

  ### Direct Usage (Sentence Transformers)

  First install the Sentence Transformers library:

  ```bash
+ pip install -U sentence-transformers fugashi SentencePiece
  ```

  Then you can load this model and run inference.
  ```python
  from sentence_transformers import SentenceTransformer
+ model = SentenceTransformer("oshizo/japanese-clip-qwen2_vl-exp-0126", trust_remote_code=True)
+
+ import io
+ import requests
+ from PIL import Image

  sentences = [
+     'モノクロの男性の肖像写真。軍服を着て石の階段に座っている。',  # "A monochrome portrait photo of a man in military uniform, sitting on stone steps."
+     "庭で茶色の犬がこちらを向いて座っている。",  # "A brown dog sitting in a garden, facing this way."
  ]
+ text_embeddings = model.encode(sentences)
+ print(text_embeddings.shape)
+ # (2, 1024)

+ image_urls = [
+     'https://upload.wikimedia.org/wikipedia/commons/7/73/Shigenobu_Okuma_5.jpg',
+     'https://upload.wikimedia.org/wikipedia/commons/7/78/Akita_inu.jpeg'
+ ]
+ images = [
+     Image.open(io.BytesIO(requests.get(image_urls[0]).content)).resize((150, 240)),
+     Image.open(io.BytesIO(requests.get(image_urls[1]).content)).resize((240, 150))
+ ]

+ image_embeddings = model.encode(images)
+ print(image_embeddings.shape)
+ # (2, 1024)

+ similarities = model.similarity(text_embeddings, image_embeddings)
+ print(similarities)
+ # tensor([[ 2.6399e-01,  8.1531e-02],
+ #         [-2.4970e-04,  3.1410e-01]])
+ ```