---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- RAG
---
# Kurage

<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64b63f8ad57e02621dc93c8b/_SkPhhsg40juscfv9dU4v.jpeg" alt="An anime image of a pink and blue jellyfish surrounded by bubbles" width=500 style="border: 5px solid #3d3c3c"/>
</p>

Kurage is a multipurpose RAG model from [Lightblue](https://huggingface.co/lightblue).

This version of the model has been trained to perform RAG in English.

For models in other languages, check out [our Kurage collection]. A multilingual model is coming soon!

# Features / How to use

First, load the model like so:

```python
from vllm import LLM, SamplingParams

llm = LLM(model="lightblue/kurage-ja")
sampling_params = SamplingParams(temperature=1.0, top_p=0.95, max_tokens=128)
```

* **Multi-chunk RAG**

This model can take multiple contexts and a question as input, and it will first output the references of the relevant contexts before outputting an answer to the question.

<details>
<summary>Prompt style</summary>

### Input:
```markdown
<<Chunk 1>>
Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level.
She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.

<<Chunk 2>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.

<<Chunk 3>>
Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.

<<Chunk 4>>
In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment.

<<Question>>
What is Japan's primary income balance currently?
```

### Output:

```markdown
<<References>>
2

<<Answer>>
4.4 trillion yen
```

</details>

<details>
<summary>Python code</summary>

```python
contexts = [
    "Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
    "Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
    "Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
    "In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]

question = "What is Japan's primary income balance currently?"

def create_rag_prompt(contexts, question):
    # Number each chunk and join them with blank lines.
    context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])

    str_inputs = f"""{context_str}

<<Question>>
{question}"""

    chat = [
        {"role": "user", "content": str_inputs},
    ]

    # Apply the model's chat template without tokenizing, so the raw string can be passed to vLLM.
    return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)

inputs = create_rag_prompt(contexts, question)

print(inputs)

outputs = llm.generate([inputs], sampling_params)

print("###")

print(outputs[0].outputs[0].text)
```

</details>

* **Single-chunk RAG**

This model can also take a single context and a question as input; it will determine whether the question can be answered from that context, and output an answer if it can. This allows multiple contexts to be processed in parallel.
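
The prompt layout for single-chunk mode is not shown above; the sketch below assumes it follows the same `<<Chunk>>`/`<<Question>>` layout as the multi-chunk example, with one prompt built per chunk so vLLM can batch them in a single call. The helper name and the example strings are illustrative; the commented-out `llm.generate` call uses the objects loaded earlier.

```python
def create_single_chunk_prompt(context: str, question: str) -> str:
    """Build a prompt containing exactly one context chunk.

    Assumption: single-chunk mode reuses the <<Chunk>>/<<Question>>
    markers from the multi-chunk prompt style.
    """
    return f"<<Chunk 1>>\n{context}\n\n<<Question>>\n{question}"

chunks = [
    "Japan's current account surplus in July was 3.2 trillion yen.",
    "Finance Minister Shunichi Suzuki appointed a new Executive Director of the Bank of Japan.",
]
question = "What was Japan's current account surplus in July?"

# One prompt per chunk; vLLM processes the whole list in one batched call.
prompts = [create_single_chunk_prompt(c, question) for c in chunks]
# outputs = llm.generate(prompts, sampling_params)  # one generation per chunk
```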

* **Answer extension**

By default, this model is trained to output the shortest possible answer to a question. However, if you require a longer answer, you can prompt the model to write one by appending " <<Long>>" after your question.
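
Concretely, the marker is appended to the question string before the prompt is built:

```python
question = "What is Japan's primary income balance currently?"

short_answer_question = question               # default: shortest possible answer
long_answer_question = question + " <<Long>>"  # request an extended answer
```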

* **Multilinguality**

We have trained our model to be able to answer questions in English based on texts in other languages too!

* **Q&A generation**

This model can also generate questions and answers based on a piece of text. This can be useful for pre-indexing a database or for fine-tuning IR models that will then be used for RAG.
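
The exact prompt for Q&A-generation mode is not documented above, so the following is illustration only: the `<<Q&A Generation Context>>` marker and helper name are assumptions, not a confirmed part of this model's interface.

```python
def create_qa_generation_prompt(chunk: str) -> str:
    # "<<Q&A Generation Context>>" is a hypothetical marker used here
    # purely for illustration; check the model's actual prompt format.
    return f"<<Q&A Generation Context>>\n{chunk}"

chunk = "Japan's current account surplus in July was 3.2 trillion yen."
prompt = create_qa_generation_prompt(chunk)
# outputs = llm.generate([prompt], sampling_params)  # would yield a generated Q&A pair
```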

# Training data

We trained on chunks sourced from documents in the [MADLAD-400](https://huggingface.co/datasets/allenai/MADLAD-400) dataset that had been evaluated, by a state-of-the-art LLM, to contain a higher amount of educational information.

We took chunks of 250, 500, and 1,000 tokens at random from each document.

We then used these chunks to generate questions and answers based on the text using a state-of-the-art LLM.

Finally, we selected negatives for each chunk using similarity scores from the dense embeddings of the [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) model.
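
The negative-selection step can be sketched as follows. This is a minimal illustration: the toy vectors stand in for real embeddings, which in practice would come from encoding each chunk with BAAI/bge-m3, and the "most similar other chunk" policy is one common way to pick hard negatives, not necessarily the exact procedure used.

```python
import numpy as np

def pick_hard_negative(embeddings: np.ndarray, idx: int) -> int:
    """Return the index of the most similar *other* chunk to chunk `idx`,
    by cosine similarity of dense embeddings (a hard negative candidate)."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed[idx]
    sims[idx] = -np.inf  # exclude the chunk itself
    return int(np.argmax(sims))

# Toy embeddings standing in for bge-m3 outputs: chunks 0 and 2 are similar.
embs = np.array([
    [1.0, 0.0, 0.1],
    [0.0, 1.0, 0.0],
    [0.9, 0.1, 0.2],
])
print(pick_hard_negative(embs, 0))  # → 2
```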