---
license: apache-2.0
---
# Cross-Encoder for Word Sense Relationship Classification

This model was trained on word sense relationships extracted from WordNet for the [semantic change type classification](https://github.com/ChangeIsKey/change-type-classification) task.

The model detects which kind of relationship (among homonymy, antonymy, hypernymy, hyponymy, and co-hyponymy) holds between two word senses: given a pair of word sense definitions, it outputs a score for each relationship class, and the highest-scoring class is the predicted relationship.

The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco)

## Usage with Transformers

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained('model_name')
tokenizer = AutoTokenizer.from_pretrained('model_name')

# Each input is a pair of word sense definitions. The pair below
# (two senses of "bank") is only an illustrative placeholder.
features = tokenizer(['a financial institution that accepts deposits'],
                     ['sloping land beside a body of water'],
                     padding=True, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    scores = model(**features).logits
    print(scores)
```
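
The logits are raw scores, one per relationship class. A minimal sketch of turning one row of logits into a predicted label, assuming a hypothetical class order (check the model's `config.json` `id2label` mapping for the actual one):

```python
# Hypothetical class order -- verify against the model's id2label mapping.
RELATIONS = ["homonymy", "antonymy", "hypernymy", "hyponymy", "co-hyponymy"]

def predict_relationship(logits):
    """Return the relationship whose logit is highest for one definition pair."""
    best = max(range(len(logits)), key=lambda i: logits[i])
    return RELATIONS[best]

# Made-up logits for a single definition pair:
print(predict_relationship([0.1, -1.2, 3.4, 0.0, 0.5]))  # -> hypernymy
```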

## Usage with SentenceTransformers

The usage becomes easier when you have [SentenceTransformers](https://www.sbert.net/) installed. Then you can use the pre-trained model like this:

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder('model_name', max_length=512)
# Each tuple holds the two sense definitions to compare.
scores = model.predict([('Definition1a', 'Definition1b'), ('Definition2a', 'Definition2b'), ('Definition3a', 'Definition3b')])
```
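
`predict` returns a row of class scores per definition pair. A hedged sketch of converting one such row into per-class probabilities with a softmax (the class order is again a hypothetical assumption):

```python
import math

# Hypothetical class order -- verify against the model's id2label mapping.
RELATIONS = ["homonymy", "antonymy", "hypernymy", "hyponymy", "co-hyponymy"]

def softmax(logits):
    """Numerically stable softmax over one row of class logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up logits for one definition pair:
for name, p in zip(RELATIONS, softmax([0.1, -1.2, 3.4, 0.0, 0.5])):
    print(f"{name}: {p:.3f}")
```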

## Performance

The following table reports the model's performance on the benchmark:

![Benchmark results](https://github.com/ChangeIsKey/change-type-classification/blob/main/lsc_ctd_benchmark_snippet_table.png "Benchmark results")