Update README.md

widget:
- text: "次の出来事の後に起こりうることは何ですか: Xが食パンにジャムを塗る"
---

# COMET-T5 ja

A T5 model fine-tuned on [ATOMIC ja](https://github.com/nlp-waseda/comet-atomic-ja) with a text-to-text language modeling objective. It was introduced in the paper cited in the BibTeX entry below.
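
Inputs follow a fixed text-to-text pattern: a relation phrased as a Japanese question, followed by the event. Below is a minimal sketch of the prompt used throughout this card; the helper function is hypothetical (not part of the model or repository), and other ATOMIC ja relations would use different question prefixes.

```python
# Hypothetical helper illustrating the input format used in the examples
# on this card. The prefix asks: "What could happen after the following
# event?" Other ATOMIC ja relations would use different prefixes.
def build_after_prompt(event: str) -> str:
    return f"次の出来事の後に起こりうることは何ですか: {event}"

print(build_after_prompt("Xが食パンにジャムを塗る"))  # event: "X spreads jam on bread"
# 次の出来事の後に起こりうることは何ですか: Xが食パンにジャムを塗る
```
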
### How to use

You can use this model directly with a pipeline for text-to-text generation. Since generation relies on sampling, we set a seed for reproducibility:

```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text2text-generation', model='nlp-waseda/comet-t5-base-japanese')
>>> set_seed(42)
>>> # Prompt: "What could happen after the following event: X studies at university"
>>> generator("次の出来事の後に起こりうることは何ですか: Xが大学で勉強する", max_length=30, num_return_sequences=5, do_sample=True)

[{'generated_text': 'Xが成績順で合格する'},
 {'generated_text': 'Xが学位を取得する'},
 {'generated_text': 'Xが勉強を始める'},
 {'generated_text': 'Xが大学に合格する'},
 {'generated_text': 'Xが試験官から褒められる'}]
```

The sampled inferences translate roughly as: X passes based on grades, X earns a degree, X starts studying, X gets into university, and X is praised by the examiner.
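
If you need more control over tokenization or decoding than the pipeline offers, the same generation can be done with the model and tokenizer directly. This is a minimal sketch, assuming the checkpoint loads with the standard `AutoModelForSeq2SeqLM`/`AutoTokenizer` classes and accepts raw Japanese input, as the pipeline example above suggests:

```python
# Minimal sketch: the same sampling-based generation without the pipeline
# wrapper. Assumes the checkpoint loads as a standard seq2seq model.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, set_seed

model_name = "nlp-waseda/comet-t5-base-japanese"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

set_seed(42)
# "What could happen after the following event: X studies at university"
inputs = tokenizer("次の出来事の後に起こりうることは何ですか: Xが大学で勉強する", return_tensors="pt")
outputs = model.generate(**inputs, max_length=30, num_return_sequences=5, do_sample=True)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```
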
### BibTeX entry and citation info

The paper is in Japanese; its title translates roughly as "Building an event commonsense knowledge graph from scratch using prompts for humans and language models", presented at the 29th Annual Meeting of the Association for Natural Language Processing (2023).

```bibtex
@InProceedings{ide_nlp2023_event,
    author    = "井手竜也 and 村田栄樹 and 堀尾海斗 and 河原大輔 and 山崎天 and 李聖哲 and 新里顕大 and 佐藤敏紀",
    title     = "人間と言語モデルに対するプロンプトを用いたゼロからのイベント常識知識グラフ構築",
    booktitle = "言語処理学会第29回年次大会",
    year      = "2023",
    url       = ""
}
```
|