ptrdvn committed
Commit 5038a3a · verified · 1 Parent(s): b33ca64

Update README.md

Files changed (1)
  1. README.md +218 -10

README.md CHANGED
@@ -100,9 +100,7 @@ inputs = create_rag_prompt(contexts, question)

 print(inputs)

- outputs = llm.generate([create_rag_prompt(contexts, question)], sampling_params)
-
- print("###")

 print(outputs[0].outputs[0].text)
 ```
@@ -119,7 +117,7 @@ This model can also take a single context and a question as input, and it will d


- ### Irrelevant context Input:
 ```markdown
 <<Chunk 1>>
 Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level.
@@ -129,7 +127,7 @@ She mentioned that if the BOJ's economic and price outlook materializes in the f
 What is Japan's primary income balance currently?
 ```

- ### Irrelevant context Output:

 ```markdown
 <<References>>
@@ -137,7 +135,7 @@ None
 ```


- ### Relevant context Input:
 ```markdown
 <<Chunk 1>>
 Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
@@ -147,7 +145,7 @@ However, the surplus continues to be driven by the primary income balance, which
 What is Japan's primary income balance currently?
 ```

- ### Relevant context Output:

 ```markdown
 <<References>>
@@ -163,11 +161,33 @@ What is Japan's primary income balance currently?
 <summary>Python code</summary>

 ```python
- outputs = llm.generate([create_rag_prompt(x, question) for x in contexts], sampling_params)
-
- print("###")
-
- print([o.outputs[0].text for o in outputs])
 ```

 </details>
@@ -176,14 +196,202 @@ print([o.outputs[0].text for o in outputs])

 By default, this model is trained to output the shortest possible answer to a question. However, if you require a longer answer, you can prompt the model to write a longer answer by writing " <<Long>>" after your question.

 * **Multilinguality**

 We have trained our model to be able to answer questions in Japanese based on texts in other languages too!

 * **Q&A generation**

 This model can also generate questions and answers based on a piece of text. This can be useful for pre-indexing a database or fine-tuning IR models that will then be used for RAG.


 # Training data
 

 print(inputs)

+ outputs = llm.generate([inputs], sampling_params)

 print(outputs[0].outputs[0].text)
 ```
 

+ ### Irrelevant context input:
 ```markdown
 <<Chunk 1>>
 Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level.

 What is Japan's primary income balance currently?
 ```

+ ### Irrelevant context output:

 ```markdown
 <<References>>

 ```

+ ### Relevant context input:
 ```markdown
 <<Chunk 1>>
 Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.

 What is Japan's primary income balance currently?
 ```

+ ### Relevant context output:

 ```markdown
 <<References>>

 <summary>Python code</summary>

 ```python
+ contexts = [
+     "Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
+     "Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
+     "Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
+     "In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
+ ]
+
+ question = "What is Japan's primary income balance currently?"
+
+ def create_rag_prompt(contexts, question):
+     context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
+
+     str_inputs = f"""{context_str}
+
+ <<Question>>
+ {question}"""
+
+     chat = [
+         {"role": "user", "content": str_inputs},
+     ]
+
+     return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
+
+ # Wrap each context in a single-element list so it is rendered as one <<Chunk 1>> block
+ outputs = llm.generate([create_rag_prompt([x], question) for x in contexts], sampling_params)
+
+ print("\n\n".join([o.outputs[0].text for o in outputs]))
 ```

 </details>
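The per-context loop above hinges on the `<<Chunk N>>` / `<<Question>>` prompt layout. As a plain-Python sketch of just the string being assembled (no vLLM or chat template required; `build_rag_input` is an illustrative name, not part of the model's API):

```python
def build_rag_input(contexts, question):
    # Number each chunk, separate chunks with blank lines, then append the
    # question block, mirroring the <<Chunk N>>/<<Question>> layout in this README.
    context_str = "\n\n".join(f"<<Chunk {i + 1}>>\n{c}" for i, c in enumerate(contexts))
    return f"{context_str}\n\n<<Question>>\n{question}"

print(build_rag_input(["First chunk.", "Second chunk."], "What is X?"))
```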
 

 By default, this model is trained to output the shortest possible answer to a question. However, if you require a longer answer, you can prompt the model to write a longer answer by writing " <<Long>>" after your question.

+ <details>
+ <summary>Prompt style</summary>
+
+ ### Input:
+ ```markdown
+ <<Chunk 1>>
+ Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
+ However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
+
+ <<Question>>
+ What is Japan's primary income balance currently? <<Long>>
+ ```
+
+ ### Output:
+
+ ```markdown
+ <<References>>
+ 1
+
+ <<Answer>>
+ 4.4 trillion yen
+ ```
+
+ </details>
+
+ <details>
+ <summary>Python code</summary>
+
+ ```python
+ contexts = [
+     "Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
+     "Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
+     "Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
+     "In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
+ ]
+
+ question = "What is Japan's primary income balance currently? <<Long>>"
+
+ def create_rag_prompt(contexts, question):
+     context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
+
+     str_inputs = f"""{context_str}
+
+ <<Question>>
+ {question}"""
+
+     chat = [
+         {"role": "user", "content": str_inputs},
+     ]
+
+     return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
+
+ # Wrap each context in a single-element list so it is rendered as one <<Chunk 1>> block
+ outputs = llm.generate([create_rag_prompt([x], question) for x in contexts], sampling_params)
+
+ print("\n\n".join([o.outputs[0].text for o in outputs]))
+ ```
+
+ </details>
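Since " <<Long>>" is just a plain-text suffix on the question string, toggling answer length can be wrapped in a tiny helper (a sketch; only the " <<Long>>" marker itself comes from this README, the helper name is illustrative):

```python
def with_length_flag(question, long_answer=False):
    # Append the model's " <<Long>>" marker (note the leading space) to request a longer answer.
    return question + " <<Long>>" if long_answer else question

print(with_length_flag("What is Japan's primary income balance currently?", long_answer=True))
```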
+
 * **Multilinguality**

 We have trained our model to be able to answer questions in Japanese based on texts in other languages too!
+
+ <details>
+ <summary>Prompt style</summary>
+
+ ### Input:
+ ```markdown
+ <<Chunk 1>>
+ Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level.
+ She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.
+
+ <<Chunk 2>>
+ 7月の日本の経常収支は3.2兆円の黒字となり、7月としては過去最高の黒字額を記録した。しかし、黒字に貢献しているのは相変わらず第一次所得収支の黒字で、7月は4.4兆円の黒字を記録し、1カ月の黒字額としては過去最高を記録した。
+
+ <<Chunk 3>>
+ รัฐมนตรีว่าการกระทรวงการคลัง ชุนอิจิ สุซูกิ ได้แต่งตั้ง เค็นจิ สุวาโซโนะ อดีตอธิบดีกรมศุลกากรและภาษีสิ่งนำเข้าแห่งกระทรวงการคลัง เป็นกรรมการบริหารธนาคารแห่งประเทศญี่ปุ่นคนใหม่ มีผลตั้งแต่วันที่ 10 สุวาโซโนะจะมาแทน มาซาอะกิ ไคซูกะ ที่พ้นวาระไปในวันที่ 9 โดยมีวาระ 4 ปี
+
+ <<Chunk 4>>
+ In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment.
+
+ <<Question>>
+ What is Japan's primary income balance currently?
+ ```
+
+ ### Output:
+
+ ```markdown
+ <<References>>
+ 2
+
+ <<Answer>>
+ 4.4 trillion yen
+ ```
+
+ </details>
+
+ <details>
+ <summary>Python code</summary>
+
+ ```python
+ contexts = [
+     "Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
+     "7月の日本の経常収支は3.2兆円の黒字となり、7月としては過去最高の黒字額を記録した。しかし、黒字に貢献しているのは相変わらず第一次所得収支の黒字で、7月は4.4兆円の黒字を記録し、1カ月の黒字額としては過去最高を記録した。",
+     "รัฐมนตรีว่าการกระทรวงการคลัง ชุนอิจิ สุซูกิ ได้แต่งตั้ง เค็นจิ สุวาโซโนะ อดีตอธิบดีกรมศุลกากรและภาษีสิ่งนำเข้าแห่งกระทรวงการคลัง เป็นกรรมการบริหารธนาคารแห่งประเทศญี่ปุ่นคนใหม่ มีผลตั้งแต่วันที่ 10 สุวาโซโนะจะมาแทน มาซาอะกิ ไคซูกะ ที่พ้นวาระไปในวันที่ 9 โดยมีวาระ 4 ปี",
+     "In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
+ ]
+
+ question = "What is Japan's primary income balance currently?"
+
+ def create_rag_prompt(contexts, question):
+     context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
+
+     str_inputs = f"""{context_str}
+
+ <<Question>>
+ {question}"""
+
+     chat = [
+         {"role": "user", "content": str_inputs},
+     ]
+
+     return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
+
+ inputs = create_rag_prompt(contexts, question)
+
+ print(inputs)
+
+ outputs = llm.generate([inputs], sampling_params)
+
+ print(outputs[0].outputs[0].text)
+ ```
+
+ </details>
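The `<<References>>` / `<<Answer>>` blocks shown in these examples follow a fixed shape, so downstream code could parse them with a sketch like this (the format is assumed from the examples above; `parse_rag_output` is illustrative, not a utility shipped with the model):

```python
def parse_rag_output(text):
    # Split the model output into cited chunk numbers and the answer string.
    refs_part, _, answer_part = text.partition("<<Answer>>")
    ref_tokens = refs_part.replace("<<References>>", "").split()
    refs = [int(tok) for tok in ref_tokens if tok.isdigit()]  # "None" yields no refs
    return refs, answer_part.strip()

print(parse_rag_output("<<References>>\n2\n\n<<Answer>>\n4.4 trillion yen"))
```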

 * **Q&A generation**

 This model can also generate questions and answers based on a piece of text. This can be useful for pre-indexing a database or fine-tuning IR models that will then be used for RAG.

+ <details>
+ <summary>Prompt style</summary>
+
+ ### Input:
+ ```markdown
+ <<Q&A Generation Context>>
+ Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
+ However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
+ ```
+
+ ### Output:
+
+ ```markdown
+ <<Question>>
+ What is Japan's current account surplus in July?
+
+ <<Answer>>
+ 3.2 trillion yen
+ ```
+
+ </details>
+
+ <details>
+ <summary>Python code</summary>
+
+ ```python
+ contexts = [
+     "Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
+     "7月の日本の経常収支は3.2兆円の黒字となり、7月としては過去最高の黒字額を記録した。しかし、黒字に貢献しているのは相変わらず第一次所得収支の黒字で、7月は4.4兆円の黒字を記録し、1カ月の黒字額としては過去最高を記録した。",
+     "รัฐมนตรีว่าการกระทรวงการคลัง ชุนอิจิ สุซูกิ ได้แต่งตั้ง เค็นจิ สุวาโซโนะ อดีตอธิบดีกรมศุลกากรและภาษีสิ่งนำเข้าแห่งกระทรวงการคลัง เป็นกรรมการบริหารธนาคารแห่งประเทศญี่ปุ่นคนใหม่ มีผลตั้งแต่วันที่ 10 สุวาโซโนะจะมาแทน มาซาอะกิ ไคซูกะ ที่พ้นวาระไปในวันที่ 9 โดยมีวาระ 4 ปี",
+     "In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
+ ]
+
+ def create_qagen_prompt(context):
+     str_inputs = f"""<<Q&A Generation Context>>
+ {context}"""
+
+     chat = [
+         {"role": "user", "content": str_inputs},
+     ]
+
+     return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
+
+ inputs = [create_qagen_prompt(x) for x in contexts]
+
+ outputs = llm.generate(inputs, sampling_params)
+
+ print("\n\n".join([o.outputs[0].text for o in outputs]))
+ ```
+
+ </details>
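The Q&A generation output interleaves `<<Question>>` and `<<Answer>>` blocks, so (question, answer) pairs for indexing or fine-tuning can be extracted with a short sketch (format assumed from the example above; `parse_qa_pairs` is illustrative, not part of the model):

```python
import re

def parse_qa_pairs(text):
    # Capture each <<Question>> ... <<Answer>> ... block as a (question, answer) tuple.
    pattern = re.compile(r"<<Question>>\s*(.*?)\s*<<Answer>>\s*(.*?)\s*(?=<<Question>>|$)", re.S)
    return pattern.findall(text)

print(parse_qa_pairs("<<Question>>\nWhat is Japan's current account surplus in July?\n\n<<Answer>>\n3.2 trillion yen"))
```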
+

 # Training data