Update README.md

README.md

```python
from vllm import LLM, SamplingParams

llm = LLM(model="lightblue/kurage-ja")
sampling_params = SamplingParams(temperature=1.0, top_p=0.95, max_tokens=128)
```

## Multi-chunk RAG

This model can take multiple contexts and a question as input; it will first output the references of the relevant contexts before answering the question.
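
Below is a minimal sketch of this usage with vLLM. The numbered-chunk layout of the user message and the sample contexts are illustrative assumptions rather than the model's canonical prompt format; the model's own chat template is applied for the final formatting.

```python
from vllm import LLM, SamplingParams

llm = LLM(model="lightblue/kurage-ja")
sampling_params = SamplingParams(temperature=1.0, top_p=0.95, max_tokens=128)

# Hypothetical contexts plus a question; the "[n] chunk" numbering is an
# assumed convention for packing several contexts into one message.
contexts = [
    "Mount Fuji is the tallest mountain in Japan, at 3,776 m.",
    "Tokyo has been the capital of Japan since 1868.",
]
question = "日本で一番高い山は何ですか？"

user_message = "\n\n".join(f"[{i}] {c}" for i, c in enumerate(contexts, 1))
user_message += f"\n\n{question}"

# Let the model's own chat template format the conversation.
prompt = llm.get_tokenizer().apply_chat_template(
    [{"role": "user", "content": user_message}],
    tokenize=False,
    add_generation_prompt=True,
)

outputs = llm.generate([prompt], sampling_params)
# The model should cite the relevant reference(s) before its answer.
print(outputs[0].outputs[0].text)
```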

## Single-chunk RAG

This model can also take a single context and a question as input; it will determine whether the question can be answered from that context, and output an answer if it can. This makes it possible to process many contexts in parallel.
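
A minimal sketch of batched single-chunk usage, under the same assumed prompt layout: one prompt is built per context, and vLLM processes the whole batch in parallel.

```python
from vllm import LLM, SamplingParams

llm = LLM(model="lightblue/kurage-ja")
sampling_params = SamplingParams(temperature=1.0, top_p=0.95, max_tokens=128)
tokenizer = llm.get_tokenizer()

question = "日本で一番高い山は何ですか？"
contexts = [
    "Mount Fuji is the tallest mountain in Japan, at 3,776 m.",  # answerable
    "Tokyo has been the capital of Japan since 1868.",           # unanswerable
]

# One prompt per context; vLLM batches the whole list in a single call.
prompts = [
    tokenizer.apply_chat_template(
        [{"role": "user", "content": f"{context}\n\n{question}"}],
        tokenize=False,
        add_generation_prompt=True,
    )
    for context in contexts
]

outputs = llm.generate(prompts, sampling_params)
# One output per context: an answer, or the model's "cannot answer" response.
print("\n\n".join([o.outputs[0].text for o in outputs]))
```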

## Answer extension

By default, this model is trained to output the shortest possible answer to a question. However, if you require a longer answer, you can prompt the model for one by writing " <<Long>>" after your question.
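
A minimal sketch contrasting the default short answer with the " <<Long>>" suffix, again under an assumed prompt layout:

```python
from vllm import LLM, SamplingParams

llm = LLM(model="lightblue/kurage-ja")
sampling_params = SamplingParams(temperature=1.0, top_p=0.95, max_tokens=128)
tokenizer = llm.get_tokenizer()

context = "Mount Fuji is the tallest mountain in Japan, at 3,776 m."
question = "日本で一番高い山は何ですか？"

# The same question twice: once as-is, once with the " <<Long>>" suffix
# (note the leading space) to request a longer answer.
prompts = [
    tokenizer.apply_chat_template(
        [{"role": "user", "content": f"{context}\n\n{q}"}],
        tokenize=False,
        add_generation_prompt=True,
    )
    for q in (question, question + " <<Long>>")
]

outputs = llm.generate(prompts, sampling_params)
print("\n\n".join([o.outputs[0].text for o in outputs]))
```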

## Multilinguality

We have trained our model to answer questions in Japanese based on texts in other languages too!
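
For example, a minimal sketch pairing a French context with a Japanese question (the context text and prompt layout are illustrative assumptions):

```python
from vllm import LLM, SamplingParams

llm = LLM(model="lightblue/kurage-ja")
sampling_params = SamplingParams(temperature=1.0, top_p=0.95, max_tokens=128)
tokenizer = llm.get_tokenizer()

# A French-language context paired with a Japanese question.
context = "Le mont Fuji est la plus haute montagne du Japon."
question = "日本で一番高い山は何ですか？"

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": f"{context}\n\n{question}"}],
    tokenize=False,
    add_generation_prompt=True,
)

outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)  # the answer should come back in Japanese
```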

## Q&A generation

This model can also generate questions and answers based on a piece of text. This can be useful for pre-indexing a database or fine-tuning IR models that will then be used for RAG.
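
A minimal sketch of this usage; the Japanese instruction used to request question-and-answer generation is a placeholder assumption, as the exact trigger depends on the model's training format.

```python
from vllm import LLM, SamplingParams

llm = LLM(model="lightblue/kurage-ja")
sampling_params = SamplingParams(temperature=1.0, top_p=0.95, max_tokens=128)
tokenizer = llm.get_tokenizer()

texts = [
    "Mount Fuji is the tallest mountain in Japan, at 3,776 m.",
    "Tokyo has been the capital of Japan since 1868.",
]

# NOTE: this instruction ("create a question and answer based on this text")
# is a placeholder; the real trigger for Q&A-generation mode is defined by
# the model's training format.
instruction = "このテキストに基づいて質問と回答を作成してください。"

prompts = [
    tokenizer.apply_chat_template(
        [{"role": "user", "content": f"{text}\n\n{instruction}"}],
        tokenize=False,
        add_generation_prompt=True,
    )
    for text in texts
]

outputs = llm.generate(prompts, sampling_params)
print("\n\n".join([o.outputs[0].text for o in outputs]))
```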

<br/>

# Training data