Update README.md
README.md (CHANGED)

Korean and English terms were aligned using KOSTOM, a Korean-English medical terminology dictionary.

Reference: [SapBERT](https://aclanthology.org/2021.naacl-main.334.pdf), [Original Code](https://github.com/cambridgeltl/sapbert)

## 2. SapBERT-KO-EN

**SapBERT** is a pre-training method for treating the numerous synonyms of a medical term as the same concept.

**SapBERT-KO-EN** aligns Korean and English medical terms so that it can handle **medical records written in a mix of Korean and English**.
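
To make "alignment" concrete, the sketch below shows the kind of synonym grouping such training relies on. The concept IDs and the list format are illustrative placeholders, not the actual KOSTOM data used for this model.

```python
# Hypothetical sketch of SapBERT-style training data: terms that share a
# concept ID are treated as synonyms (positives), all other pairs as negatives.
# The IDs below are placeholders, not real KOSTOM or UMLS codes.
training_terms = [
    ('K000001', '간경화'),            # liver cirrhosis (Korean)
    ('K000001', 'liver cirrhosis'),
    ('K000001', '간경변'),            # synonym of 간경화
    ('K000002', '간암'),              # liver cancer (Korean)
    ('K000002', 'liver cancer'),
]
```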

※ For a detailed description, see: [Github](https://github.com/snumin44/SapBERT-KO-EN)

## 3. Training

The base model and the hyperparameters used for training are as follows; a sketch of how the scale and threshold values enter the training loss is given after the list.

- Model : klue/bert-base
- Epochs : 1
- Batch Size : 64
- Max Length : 64
- Dropout : 0.1
- Pooler : 'cls'
- Eval Step : 100
- Threshold : 0.8
- Scale Positive Sample : 1
- Scale Negative Sample : 60
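
SapBERT-style training optimizes a metric-learning objective over these synonym groups. The sketch below is a plain Multi-Similarity loss in which, as an assumption, `Scale Positive Sample`, `Scale Negative Sample`, and `Threshold` act as the positive scale, negative scale, and similarity base; the hard-pair mining of the original SapBERT code is omitted, and this is not the repository's actual training script.

```python
# Minimal sketch (assumption, not the repository's training code): a
# Multi-Similarity loss where scale_pos / scale_neg / threshold correspond
# to the 'Scale Positive Sample', 'Scale Negative Sample', and 'Threshold'
# hyperparameters above. Hard-pair mining from the original SapBERT is omitted.
import torch
import torch.nn.functional as F

def multi_similarity_loss(embeddings, concept_ids,
                          scale_pos=1.0, scale_neg=60.0, threshold=0.8):
    """embeddings: (batch, dim) term vectors; concept_ids: (batch,) shared by synonyms."""
    embeddings = F.normalize(embeddings, dim=-1)
    sim = embeddings @ embeddings.T                      # pairwise cosine similarities
    same = concept_ids.unsqueeze(0) == concept_ids.unsqueeze(1)
    eye = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    pos_mask, neg_mask = same & ~eye, ~same              # synonyms vs. non-synonyms

    loss = sim.new_zeros(())
    for i in range(sim.size(0)):
        pos, neg = sim[i][pos_mask[i]], sim[i][neg_mask[i]]
        if pos.numel() == 0 or neg.numel() == 0:
            continue
        # Pull synonyms above the threshold, push non-synonyms below it.
        pos_term = torch.log1p(torch.exp(-scale_pos * (pos - threshold)).sum()) / scale_pos
        neg_term = torch.log1p(torch.exp(scale_neg * (neg - threshold)).sum()) / scale_neg
        loss = loss + pos_term + neg_term
    return loss / sim.size(0)
```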

※ English terms are mostly processed at the alphabet (letter) level by the tokenizer.
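
A quick way to see this is to tokenize an English term directly; the exact word pieces depend on the released vocabulary, so the expectation in the comment is only indicative.

```python
# Illustrative check of how an English medical term is tokenized; the exact
# word pieces depend on the model's released vocabulary.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('snumin44/sap-bert-ko-en')
print(tokenizer.tokenize('liver cirrhosis'))  # expect many short, letter-level pieces
```
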
```python
import numpy as np
from transformers import AutoModel, AutoTokenizer

# Load the released SapBERT-KO-EN checkpoint.
model_path = 'snumin44/sap-bert-ko-en'
model = AutoModel.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

query = '간경화'  # liver cirrhosis (Korean)

targets = [
    'liver cirrhosis',
    '간경변',        # synonym of the query
    'liver cancer',
    '간암',          # liver cancer (Korean)
    'brain tumor',
    '뇌종양'         # brain tumor (Korean)
]

# Embed the query with the pooler ([CLS]) output.
query_feature = tokenizer(query, return_tensors='pt')
query_outputs = model(**query_feature, return_dict=True)
query_embeddings = query_outputs.pooler_output.detach().numpy().squeeze()

def cos_sim(A, B):
    return np.dot(A, B) / (np.linalg.norm(A) * np.linalg.norm(B))

# Embed each target and compare it with the query.
for idx, target in enumerate(targets):
    target_feature = tokenizer(target, return_tensors='pt')
    target_outputs = model(**target_feature, return_dict=True)
    target_embeddings = target_outputs.pooler_output.detach().numpy().squeeze()
    similarity = cos_sim(query_embeddings, target_embeddings)
    print(f"Similarity between query and target {idx}: {similarity:.4f}")
```
```
Similarity between query and target 0: 0.7145
Similarity between query and target 1: 0.7186
Similarity between query and target 2: 0.6183
Similarity between query and target 3: 0.6972
Similarity between query and target 4: 0.3929
Similarity between query and target 5: 0.4260
```
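
For longer candidate lists, the same comparison can be done in a single batch; the following is a minimal sketch using the same checkpoint and pooler output.

```python
# Batch variant of the example above (sketch): encode the query and all
# targets at once and rank the targets by cosine similarity to the query.
import numpy as np
from transformers import AutoModel, AutoTokenizer

model_path = 'snumin44/sap-bert-ko-en'
model = AutoModel.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

query = '간경화'
targets = ['liver cirrhosis', '간경변', 'liver cancer', '간암', 'brain tumor', '뇌종양']

# Encode everything in one padded batch and L2-normalize the embeddings.
features = tokenizer([query] + targets, padding=True, return_tensors='pt')
embeddings = model(**features, return_dict=True).pooler_output.detach().numpy()
embeddings = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)

# Cosine similarity of every target to the query, highest first.
scores = embeddings[1:] @ embeddings[0]
for target, score in sorted(zip(targets, scores), key=lambda x: -x[1]):
    print(f'{target}: {score:.4f}')
```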