[Figure 2: The 5-shot performance of Polyglot-Ko models on each task demonstrates a clear trend that as the compute increases, the performance improves.]
in the COPA task. It showcases the model's exceptional performance and reinforces its effectiveness in understanding and reasoning tasks.
4.2 HellaSwag
The right portion of Table 3 presents the performance of different language models on the HellaSwag task for the Korean language. The table includes the model name, the number of parameters in each model, and their performance at different numbers of shots on the HellaSwag task from the KOBEST dataset, measured by F1 score. According to the table, our 12.8B model achieves the highest scores in the 5-shot, 10-shot, and 50-shot scenarios, with F1 scores of 0.6306, 0.6098, and 0.6118, respectively. In the 0-shot scenario, however, kakaobrain's KoGPT model achieves the highest score of 0.5590. Overall, our 12.8B model demonstrates the best performance among the listed models on the HellaSwag task for the Korean language. These results highlight the effectiveness of our 12.8B model in understanding and generating coherent responses within the context of the HellaSwag task. Its superior performance in the majority of the shot scenarios demonstrates its ability to handle complex language understanding tasks and generate contextually appropriate responses.
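KOBEST HellaSwag is a multiple-choice task, so in harness-style evaluation (the framework of Gao et al., 2021) a candidate ending is typically chosen by comparing the log-likelihood the model assigns to each option. The sketch below illustrates that scoring scheme with the Hugging Face transformers API; the checkpoint id and the toy Korean item are illustrative assumptions, not the paper's evaluation code.

```python
# Minimal sketch of multiple-choice scoring by log-likelihood, as typically done
# for KOBEST-style tasks. The checkpoint id and the toy item below are
# illustrative assumptions, not the paper's evaluation setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "EleutherAI/polyglot-ko-1.3b"  # assumed public checkpoint id
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()


def continuation_logprob(context: str, continuation: str) -> float:
    """Sum of log-probabilities the model assigns to `continuation` given `context`."""
    ctx_len = tokenizer(context, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(context + continuation, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits            # (1, seq_len, vocab)
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = full_ids[:, 1:]                      # next-token targets
    token_lp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return token_lp[:, ctx_len - 1:].sum().item()  # count only continuation tokens


# Hypothetical HellaSwag-style item: pick the most plausible ending.
context = "그는 아침에 일어나 커피를 내렸다."
choices = [" 그리고 커피를 마셨다.", " 그리고 커피가 노래를 불렀다."]
scores = [continuation_logprob(context, c) for c in choices]
print("prediction:", choices[scores.index(max(scores))].strip())
```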
4.3 BoolQ
In the BoolQ task, which focuses on answering boolean questions, a thorough analysis of the results demonstrates that our models outperformed the others. Specifically, our largest model, Polyglot-ko-12.8B, achieved the highest F1 scores. This indicates that the model possesses exceptional accuracy in predicting answers to boolean questions. Conversely, SKT's ko-gpt-trinity model exhibited relatively lower F1 scores across all shot settings, while Facebook's XGLM model consistently underperformed. As a result, our models demonstrate strong performance in the BoolQ task.
4.4 SentiNeg
The SentiNeg task results are also presented in the left portion of Table 4. This task focuses on sentiment analysis for negation detection. A comprehensive analysis of the results reveals that our models exhibited superior performance. In particular, our 12.8B model achieved the highest F1 scores, indicating its excellent ability to accurately detect negated sentiment. SKT's ko-gpt-trinity model also showed consistent improvement, while kakaobrain's KoGPT model demonstrated varied performance, with slight increases or decreases in F1 score depending on the shot setting. However, Facebook's XGLM model consistently displayed lower performance on the SentiNeg task.
Furthermore, during our investigation, we discovered that the default prompt used for this task introduced significant instability, particularly in zero-shot performance. To address this issue, we devised a modified prompt, which led to substantial improvements in the model's performance. For detailed results, please refer to the Appendix. Overall, our models demonstrate strong performance in the SentiNeg task, showcasing their effectiveness in sentiment analysis and negation detection.
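To make the prompt-sensitivity point concrete, the sketch below compares zero-shot predictions under two prompt templates, reusing the continuation_logprob helper from the earlier sketch. The templates and label verbalizers here are purely hypothetical; the paper's actual default and modified SentiNeg prompts are given only in its appendix.

```python
# Purely illustrative comparison of two zero-shot prompt templates for a
# SentiNeg-style sentiment decision, reusing continuation_logprob() from the
# previous sketch. The templates and label verbalizers are hypothetical; the
# paper's actual default and modified prompts appear only in its appendix.
TEMPLATES = {
    "default-like": "문장: {sentence}\n이 문장은",
    "modified-like": "다음 문장을 읽고 감정을 판단하시오.\n문장: {sentence}\n감정:",
}
LABELS = [" 긍정", " 부정"]  # assumed verbalizers for positive / negative


def classify(sentence: str, template: str) -> str:
    """Pick the label verbalizer with the higher log-likelihood under the prompt."""
    prompt = template.format(sentence=sentence)
    scores = [continuation_logprob(prompt, label) for label in LABELS]
    return LABELS[scores.index(max(scores))].strip()


for name, template in TEMPLATES.items():
    print(name, classify("배송이 늦어서 많이 실망했어요.", template))
```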
When analyzing the performance of Polyglot-Ko models in the 5-shot evaluation across all four tasks, it becomes clear that performance improves with increasing compute (Kaplan et al., 2020), as depicted in Figure 2.
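For intuition about the compute axis of Figure 2, training compute is often approximated as C ≈ 6ND FLOPs for a model with N parameters trained on D tokens, a rule of thumb popularized by Kaplan et al. (2020). The sketch below applies that estimate to the publicly released Polyglot-Ko sizes; the token count is a placeholder, not a number restated from this section.

```python
# Back-of-the-envelope training-compute estimate C ≈ 6·N·D FLOPs
# (Kaplan et al., 2020), with N parameters and D training tokens.
# The token count below is a placeholder, not a figure from this section.
PARAM_COUNTS = {               # publicly released Polyglot-Ko sizes
    "1.3b": 1.3e9,
    "3.8b": 3.8e9,
    "5.8b": 5.8e9,
    "12.8b": 12.8e9,
}
D_TOKENS = 200e9               # hypothetical number of training tokens

for name, n_params in PARAM_COUNTS.items():
    flops = 6 * n_params * D_TOKENS
    print(f"polyglot-ko-{name}: ~{flops:.2e} training FLOPs")
```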
5 Limitations and Disclaimers
Polyglot-Ko has been primarily trained to optimize next-token prediction, which makes it suitable for a wide range of tasks. However, it is crucial to acknowledge the potential for unexpected outcomes. While Polyglot-Ko strives to generate the most statistically likely response, it may not always provide the most accurate or factual answer. It is important to exercise caution when relying on the model's outputs.
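As a concrete illustration of this caveat, a next-token predictor simply samples a plausible continuation; nothing in the decoding loop checks factual accuracy. A minimal generation sketch with transformers is shown below; the checkpoint id and prompt are assumptions for illustration.

```python
# Minimal generation sketch: the model samples a statistically plausible
# continuation; nothing here verifies factual accuracy. The checkpoint id and
# prompt are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "EleutherAI/polyglot-ko-1.3b"  # assumed public checkpoint id
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

inputs = tokenizer("폴리글롯 모델은", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(
        **inputs, max_new_tokens=32, do_sample=True, top_p=0.9, temperature=0.8
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
# Treat the output as plausible text, not as a verified factual answer.
```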
Additionally, it is worth noting that Polyglot-Ko may generate content that is socially unacceptable or offensive. To mitigate this risk, we strongly recommend implementing a human curator or employing other filtering mechanisms to censor sensitive or inappropriate content.

Regarding the hardware used for training, it is important to mention that the models were trained on a setup with relatively low TFLOPS compared to the upcoming versions of Polyglot that we are currently preparing. This resulted in longer training times and greater resource requirements to complete the training process successfully.
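Returning to the filtering recommendation above, one lightweight option is to screen generated text before it reaches users. The sketch below is a deliberately simple blocklist check with placeholder terms; it is not the authors' pipeline, and a production system would pair it with a trained safety classifier and human review.

```python
# Deliberately simple post-generation blocklist filter of the kind recommended
# above. The blocked terms are placeholders; a production system would combine
# this with a trained safety classifier and human review.
BLOCKLIST = frozenset({"금지어1", "금지어2"})  # placeholder sensitive terms


def is_safe(text: str, blocklist: frozenset = BLOCKLIST) -> bool:
    """Return False if any blocked term appears in the generated text."""
    return not any(term in text for term in blocklist)


generated = "모델이 생성한 예시 문장입니다."
print(generated if is_safe(generated) else "[filtered]")
```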
Furthermore, we discovered mistakes in the data preprocessing phase during our experiments. Specifically, the data was incorrectly stripped of newlines, leading to a loss of document structure. This likely resulted in some information loss during the model training process. It is important to address this issue in future iterations to ensure the preservation of document structure and minimize information loss. These considerations highlight the importance of continuously improving the training process and addressing any limitations or errors that arise. By doing so, we can enhance the performance and reliability of Polyglot-Ko for a wide range of tasks and applications.
6 Conclusion
Currently, we are actively working on training the new version of the Polyglot Korean language model. Our aim is to expand its capacity to eventually reach 40B parameters. This process has involved significant trial and error as we strive to enhance the performance and capabilities of the model.
Based on our experience and expertise in developing Korean language models, we have also embarked on the creation of two types of multilingual models. The first type is an East-Asian model, which includes Korean, Chinese, Japanese, Indonesian, Malay, Vietnamese, Thai, and English. This model aims to cater to the linguistic needs of countries in the East-Asian region.
The second type is a Romance model, which incorporates Spanish, Portuguese, French, Romanian, and Italian. This model is designed to support the linguistic requirements of Romance language-speaking countries.
By developing these multilingual models, we aim to democratize and promote access to language model technology across the globe. We believe that this will contribute to the advancement of research and academia in various countries, allowing users to leverage the power of language models for diverse applications and linguistic contexts. We are excited about the potential impact and benefits these models can bring to researchers, practitioners, and language enthusiasts worldwide.
Acknowledgements
We are grateful to Stability AI for generously providing the essential computing resources that were instrumental in the successful execution of this project. Their support and infrastructure were crucial for training and evaluating our models.
Additionally, we would like to extend our appreciation to TUNiB for their invaluable contribution in providing a large-scale Korean dataset. This dataset played a pivotal role in the development and training of our language models, and we are grateful for their collaboration and partnership.
Finally, we would like to thank Stella Biderman for her valuable feedback on the paper, which greatly enhanced the quality and clarity of our work.
References
f4ff7d0f-a1fa-4891-b32f-fabbd46fcae6 | 2306.02254v2.pdf | text | Alex Andonian, Quentin Anthony, Stella Biderman, Sid Black, Preetham Gali, Leo Gao, Eric Hallahan, Josh Levy-Kramer, Connor Leahy, Lucas Nestler, Kip Parker, Michael Pieler, Shivanshu Purohit, Tri Songz, Wang Phil, and Samuel Weinbach. 2021. GPT-NeoX: Large Scale Autoregressive Language Modeling in PyTorch. | null | 442 | 153 | 72 | image/png | 8 | application/pdf | 1.0.0 | [] | null | null | {"self_ref": "#/texts/97", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 7, "bbox": {"l": 304.9067077636719, "t": 407.0577392578125, "r": 525.7919311523438, "b": 330.6022033691406, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 308]}], "orig": "Alex Andonian, Quentin Anthony, Stella Biderman, Sid Black, Preetham Gali, Leo Gao, Eric Hallahan, Josh Levy-Kramer, Connor Leahy, Lucas Nestler, Kip Parker, Michael Pieler, Shivanshu Purohit, Tri Songz, Wang Phil, and Samuel Weinbach. 2021. GPT-NeoX: Large Scale Autoregressive Language Modeling in PyTorch.", "text": "Alex Andonian, Quentin Anthony, Stella Biderman, Sid Black, Preetham Gali, Leo Gao, Eric Hallahan, Josh Levy-Kramer, Connor Leahy, Lucas Nestler, Kip Parker, Michael Pieler, Shivanshu Purohit, Tri Songz, Wang Phil, and Samuel Weinbach. 2021. GPT-NeoX: Large Scale Autoregressive Language Modeling in PyTorch."} | null |
|
7311db7c-bcfe-48be-b17e-93d78c465c46 | 2306.02254v2.pdf | text | Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H. Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang, Gustavo Hernandez Abrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan Botha, James Bradbury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng, Colin Cherry, Christopher A. Choquette-Choo, Aakanksha Chowdhery, Clément Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark Díaz, Nan Du, Ethan Dyer, Vlad Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier Garcia, Sebastian Gehrmann, Lucas Gonzalez, Guy GurAri, Steven Hand, Hadi Hashemi, Le Hou, Joshua Howland, Andrea Hu, Jeffrey Hui, Jeremy Hurwitz, Michael Isard, Abe Ittycheriah, Matthew Jagielski, Wenhao Jia, Kathleen Kenealy, Maxim Krikun, Sneha Kudugunta, Chang Lan, Katherine Lee, Benjamin Lee, Eric Li, Music Li, Wei Li, YaGuang Li, | null | 441 | 503 | 72 | image/png | 8 | application/pdf | 1.0.0 | [] | null | null | {"self_ref": "#/texts/98", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 7, "bbox": {"l": 306.0545959472656, "t": 319.34112548828125, "r": 526.2984619140625, "b": 67.96844482421875, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 1093]}], "orig": "Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H. Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang, Gustavo Hernandez Abrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan Botha, James Bradbury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng, Colin Cherry, Christopher A. Choquette-Choo, Aakanksha Chowdhery, Cl\u00e9ment Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark D\u00edaz, Nan Du, Ethan Dyer, Vlad Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier Garcia, Sebastian Gehrmann, Lucas Gonzalez, Guy GurAri, Steven Hand, Hadi Hashemi, Le Hou, Joshua Howland, Andrea Hu, Jeffrey Hui, Jeremy Hurwitz, Michael Isard, Abe Ittycheriah, Matthew Jagielski, Wenhao Jia, Kathleen Kenealy, Maxim Krikun, Sneha Kudugunta, Chang Lan, Katherine Lee, Benjamin Lee, Eric Li, Music Li, Wei Li, YaGuang Li,", "text": "Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H. Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang, Gustavo Hernandez Abrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan Botha, James Bradbury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng, Colin Cherry, Christopher A. 
Choquette-Choo, Aakanksha Chowdhery, Cl\u00e9ment Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark D\u00edaz, Nan Du, Ethan Dyer, Vlad Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier Garcia, Sebastian Gehrmann, Lucas Gonzalez, Guy GurAri, Steven Hand, Hadi Hashemi, Le Hou, Joshua Howland, Andrea Hu, Jeffrey Hui, Jeremy Hurwitz, Michael Isard, Abe Ittycheriah, Matthew Jagielski, Wenhao Jia, Kathleen Kenealy, Maxim Krikun, Sneha Kudugunta, Chang Lan, Katherine Lee, Benjamin Lee, Eric Li, Music Li, Wei Li, YaGuang Li,"} | null |
|
4cc666b7-f7a4-4744-a2a8-9a87407b64b1 | 2306.02254v2.pdf | text | Jian Li, Hyeontaek Lim, Hanzhao Lin, Zhongtao Liu, Frederick Liu, Marcello Maggioni, Aroma Mahendru, Joshua Maynez, Vedant Misra, Maysam Moussalem, Zachary Nado, John Nham, Eric Ni, Andrew Nystrom, Alicia Parrish, Marie Pellat, Martin Polacek, Alex Polozov, Reiner Pope, Siyuan Qiao, Emily Reif, Bryan Richter, Parker Riley, Alex Castro Ros, Aurko Roy, Brennan Saeta, Rajkumar Samuel, Renee Shelby, Ambrose Slone, Daniel Smilkov, David R. So, Daniel Sohn, Simon Tokumine, Dasha Valter, Vijay Vasudevan, Kiran Vodrahalli, Xuezhi Wang, Pidong Wang, Zirui Wang, Tao Wang, John Wieting, Yuhuai Wu, Kelvin Xu, Yunhan Xu, Linting Xue, Pengcheng Yin, Jiahui Yu, Qiao Zhang, Steven Zheng, Ce Zheng, Weikang Zhou, Denny Zhou, Slav Petrov, and Yonghui Wu. 2023. Palm 2 technical report. | null | 420 | 372 | 72 | image/png | 9 | application/pdf | 1.0.0 | [] | null | null | {"self_ref": "#/texts/99", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 8, "bbox": {"l": 80.82398986816406, "t": 767.403076171875, "r": 290.87542724609375, "b": 581.50244140625, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 776]}], "orig": "Jian Li, Hyeontaek Lim, Hanzhao Lin, Zhongtao Liu, Frederick Liu, Marcello Maggioni, Aroma Mahendru, Joshua Maynez, Vedant Misra, Maysam Moussalem, Zachary Nado, John Nham, Eric Ni, Andrew Nystrom, Alicia Parrish, Marie Pellat, Martin Polacek, Alex Polozov, Reiner Pope, Siyuan Qiao, Emily Reif, Bryan Richter, Parker Riley, Alex Castro Ros, Aurko Roy, Brennan Saeta, Rajkumar Samuel, Renee Shelby, Ambrose Slone, Daniel Smilkov, David R. So, Daniel Sohn, Simon Tokumine, Dasha Valter, Vijay Vasudevan, Kiran Vodrahalli, Xuezhi Wang, Pidong Wang, Zirui Wang, Tao Wang, John Wieting, Yuhuai Wu, Kelvin Xu, Yunhan Xu, Linting Xue, Pengcheng Yin, Jiahui Yu, Qiao Zhang, Steven Zheng, Ce Zheng, Weikang Zhou, Denny Zhou, Slav Petrov, and Yonghui Wu. 2023. Palm 2 technical report.", "text": "Jian Li, Hyeontaek Lim, Hanzhao Lin, Zhongtao Liu, Frederick Liu, Marcello Maggioni, Aroma Mahendru, Joshua Maynez, Vedant Misra, Maysam Moussalem, Zachary Nado, John Nham, Eric Ni, Andrew Nystrom, Alicia Parrish, Marie Pellat, Martin Polacek, Alex Polozov, Reiner Pope, Siyuan Qiao, Emily Reif, Bryan Richter, Parker Riley, Alex Castro Ros, Aurko Roy, Brennan Saeta, Rajkumar Samuel, Renee Shelby, Ambrose Slone, Daniel Smilkov, David R. So, Daniel Sohn, Simon Tokumine, Dasha Valter, Vijay Vasudevan, Kiran Vodrahalli, Xuezhi Wang, Pidong Wang, Zirui Wang, Tao Wang, John Wieting, Yuhuai Wu, Kelvin Xu, Yunhan Xu, Linting Xue, Pengcheng Yin, Jiahui Yu, Qiao Zhang, Steven Zheng, Ce Zheng, Weikang Zhou, Denny Zhou, Slav Petrov, and Yonghui Wu. 2023. Palm 2 technical report."} | null |
|
447e3e63-ecbc-4f35-8297-8ed83c354eba | 2306.02254v2.pdf | text | Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar van der Wal. 2023. Pythia: A suite for analyzing large language models across training and scaling. | null | 442 | 152 | 72 | image/png | 9 | application/pdf | 1.0.0 | [] | null | null | {"self_ref": "#/texts/100", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 8, "bbox": {"l": 70.08699798583984, "t": 572.5798950195312, "r": 290.78399658203125, "b": 496.63507080078125, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 314]}], "orig": "Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar van der Wal. 2023. Pythia: A suite for analyzing large language models across training and scaling.", "text": "Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar van der Wal. 2023. Pythia: A suite for analyzing large language models across training and scaling."} | null |
|
69c488fd-acb7-4a7c-ae44-552bf169a640 | 2306.02254v2.pdf | text | Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, and Samuel Weinbach. 2022. Gpt-neox-20b: An opensource autoregressive language model. | null | 442 | 151 | 72 | image/png | 9 | application/pdf | 1.0.0 | [] | null | null | {"self_ref": "#/texts/101", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 8, "bbox": {"l": 70.17183685302734, "t": 487.153564453125, "r": 290.7921447753906, "b": 411.517822265625, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 321]}], "orig": "Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, and Samuel Weinbach. 2022. Gpt-neox-20b: An opensource autoregressive language model.", "text": "Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, and Samuel Weinbach. 2022. Gpt-neox-20b: An opensource autoregressive language model."} | null |
|
30362e4f-7166-40f8-9292-6235dd4012f0 | 2306.02254v2.pdf | text | Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. Palm: Scaling language modeling with pathways. | null | 441 | 501 | 72 | image/png | 9 | application/pdf | 1.0.0 | [] | null | null | {"self_ref": "#/texts/102", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 8, "bbox": {"l": 70.53856658935547, "t": 401.83868408203125, "r": 290.8209533691406, "b": 151.47186279296875, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 1071]}], "orig": "Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. Palm: Scaling language modeling with pathways.", "text": "Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. 
Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. Palm: Scaling language modeling with pathways."} | null |
|
181be652-ab80-4e54-ae6e-b9f452929844 | 2306.02254v2.pdf | text | Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. Electra: Pre-training text encoders as discriminators rather than generators. | null | 442 | 85 | 72 | image/png | 9 | application/pdf | 1.0.0 | [] | null | null | {"self_ref": "#/texts/103", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 8, "bbox": {"l": 70.18842315673828, "t": 141.381103515625, "r": 290.7839660644531, "b": 99.15707397460938, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 154]}], "orig": "Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. Electra: Pre-training text encoders as discriminators rather than generators.", "text": "Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. Electra: Pre-training text encoders as discriminators rather than generators."} | null |
|
0bc4ed44-89b8-46f6-9b5c-fc35f543afec | 2306.02254v2.pdf | text | Together Computer. 2023. Redpajama: An open source recipe to reproduce llama training dataset. | null | 438 | 43 | 72 | image/png | 9 | application/pdf | 1.0.0 | [] | null | null | {"self_ref": "#/texts/104", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 8, "bbox": {"l": 69.83174133300781, "t": 89.34149169921875, "r": 289.1358337402344, "b": 68.07855224609375, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 94]}], "orig": "Together Computer. 2023. Redpajama: An open source recipe to reproduce llama training dataset.", "text": "Together Computer. 2023. Redpajama: An open source recipe to reproduce llama training dataset."} | null |
|
d2807917-fb8c-42eb-8749-924ced1fa5c7 | 2306.02254v2.pdf | text | Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. | null | 441 | 85 | 72 | image/png | 9 | application/pdf | 1.0.0 | [] | null | null | {"self_ref": "#/texts/105", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 8, "bbox": {"l": 305.53985595703125, "t": 767.5208129882812, "r": 526.05810546875, "b": 724.73681640625, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 153]}], "orig": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding.", "text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding."} | null |
|
070678e3-bf82-451c-ae69-5cdcfa50445c | 2306.02254v2.pdf | text | Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, Jason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. 2021. A framework for few-shot language model evaluation. | null | 440 | 130 | 72 | image/png | 9 | application/pdf | 1.0.0 | [] | null | null | {"self_ref": "#/texts/106", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 8, "bbox": {"l": 305.5957946777344, "t": 712.1463623046875, "r": 525.6582641601562, "b": 647.0357055664062, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 295]}], "orig": "Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, Jason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. 2021. A framework for few-shot language model evaluation.", "text": "Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, Jason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. 2021. A framework for few-shot language model evaluation."} | null |
|
3437200e-fdcd-42e8-863a-a3e712139455 | 2306.02254v2.pdf | text | Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. | null | 441 | 85 | 72 | image/png | 9 | application/pdf | 1.0.0 | [] | null | null | {"self_ref": "#/texts/107", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 8, "bbox": {"l": 305.2732238769531, "t": 634.0055541992188, "r": 526.1553344726562, "b": 591.5368041992188, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 189]}], "orig": "Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models.", "text": "Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models."} | null |
|
9a04cddb-5ad3-4297-a70f-731e7724f446 | 2306.02254v2.pdf | text | Dohyeong Kim, Myeongjun Jang, Deuk Sin Kwon, and Eric Davis. 2022. Kobest: Korean balanced evaluation of significant tasks. | null | 441 | 63 | 72 | image/png | 9 | application/pdf | 1.0.0 | [] | null | null | {"self_ref": "#/texts/108", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 8, "bbox": {"l": 305.624755859375, "t": 578.0872802734375, "r": 526.0599975585938, "b": 546.4119873046875, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 123]}], "orig": "Dohyeong Kim, Myeongjun Jang, Deuk Sin Kwon, and Eric Davis. 2022. Kobest: Korean balanced evaluation of significant tasks.", "text": "Dohyeong Kim, Myeongjun Jang, Deuk Sin Kwon, and Eric Davis. 2022. Kobest: Korean balanced evaluation of significant tasks."} | null |
|
2f6cf544-32db-479c-acf4-25ebbbdd1173 | 2306.02254v2.pdf | text | Ildoo Kim, Gunsoo Han, Jiyeon Ham, and Woonhyuk Baek. 2021. Kogpt: Kakaobrain korean(hangul) generative pre-trained transformer. https://github. com/kakaobrain/kogpt . | null | 445 | 85 | 72 | image/png | 9 | application/pdf | 1.0.0 | [] | null | null | {"self_ref": "#/texts/109", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 8, "bbox": {"l": 305.6619873046875, "t": 533.46533203125, "r": 527.896484375, "b": 490.90509033203125, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 167]}], "orig": "Ildoo Kim, Gunsoo Han, Jiyeon Ham, and Woonhyuk Baek. 2021. Kogpt: Kakaobrain korean(hangul) generative pre-trained transformer. https://github. com/kakaobrain/kogpt .", "text": "Ildoo Kim, Gunsoo Han, Jiyeon Ham, and Woonhyuk Baek. 2021. Kogpt: Kakaobrain korean(hangul) generative pre-trained transformer. https://github. com/kakaobrain/kogpt ."} | null |
|
5d14099a-d05a-4d8f-a59a-036aaf0de46c | 2306.02254v2.pdf | text | Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, and Xian Li. 2022. Few-shot learning with multilingual language models. | null | 441 | 173 | 72 | image/png | 9 | application/pdf | 1.0.0 | [] | null | null | {"self_ref": "#/texts/110", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 8, "bbox": {"l": 305.4076843261719, "t": 477.416015625, "r": 526.0598754882812, "b": 390.8712463378906, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 374]}], "orig": "Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, and Xian Li. 2022. Few-shot learning with multilingual language models.", "text": "Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, and Xian Li. 2022. Few-shot learning with multilingual language models."} | null |
|
9dccf13b-dd80-41f0-af89-c99e1b5642fe | 2306.02254v2.pdf | text | Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. | null | 441 | 109 | 72 | image/png | 9 | application/pdf | 1.0.0 | [] | null | null | {"self_ref": "#/texts/111", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 8, "bbox": {"l": 305.3963623046875, "t": 378.3531188964844, "r": 526.1514892578125, "b": 323.95159912109375, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 198]}], "orig": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach.", "text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach."} | null |
|
e6b0eaa3-ae73-4a41-9901-9503fe691105 | 2306.02254v2.pdf | text | Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9. | null | 440 | 87 | 72 | image/png | 9 | application/pdf | 1.0.0 | [] | null | null | {"self_ref": "#/texts/112", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 8, "bbox": {"l": 305.509033203125, "t": 311.301513671875, "r": 525.6583862304688, "b": 267.95355224609375, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 167]}], "orig": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.", "text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9."} | null |
|
76dc2ce3-aa20-44d9-8ffc-f5a7e3541820 | 2306.02254v2.pdf | text | Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilić, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, et al. 2022. Bloom: A 176b-parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100. | null | 441 | 130 | 72 | image/png | 9 | application/pdf | 1.0.0 | [] | null | null | {"self_ref": "#/texts/113", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 8, "bbox": {"l": 305.4453125, "t": 255.43878173828125, "r": 526.153076171875, "b": 190.39105224609375, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 273]}], "orig": "Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ili'c, Daniel Hesslow, Roman Castagn\u00e9, Alexandra Sasha Luccioni, Fran\u00e7ois Yvon, Matthias Gall\u00e9, et al. 2022. Bloom: A 176bparameter open-access multilingual language model. arXiv preprint arXiv:2211.05100.", "text": "Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ili\u0107, Daniel Hesslow, Roman Castagn\u00e9, Alexandra Sasha Luccioni, Fran\u00e7ois Yvon, Matthias Gall\u00e9, et al. 2022. Bloom: A 176b-parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100."} | null
|
712ac65e-7b90-421f-b568-aa61553dceb0 | 2306.02254v2.pdf | text | MosaicML NLP Team. 2023. Introducing mpt-7b: A new standard for open-source, commercially usable llms. Accessed: 2023-03-28. | null | 441 | 64 | 72 | image/png | 9 | application/pdf | 1.0.0 | [] | null | null | {"self_ref": "#/texts/114", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 8, "bbox": {"l": 305.5268859863281, "t": 178.515380859375, "r": 526.0599975585938, "b": 146.4010772705078, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 114]}], "orig": "MosaicML NLP Team. 2023. Introducing mpt-7b: A new standard for open-source, ly usable llms. Accessed: 2023-03-28.", "text": "MosaicML NLP Team. 2023. Introducing mpt-7b: A new standard for open-source, commercially usable llms. Accessed: 2023-03-28."} | null
|
6bcb035d-3dbb-4c5c-9805-969259cdab19 | 2306.02254v2.pdf | text | Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. Llama: Open and efficient foundation language models. | null | 439 | 130 | 72 | image/png | 9 | application/pdf | 1.0.0 | [] | null | null | {"self_ref": "#/texts/115", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 8, "bbox": {"l": 305.7596130371094, "t": 133.58966064453125, "r": 525.6535034179688, "b": 68.61895751953125, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 291]}], "orig": "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth\u00e9e Lacroix, Baptiste Rozi\u00e8re, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. Llama: Open and efficient foundation language models.", "text": "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth\u00e9e Lacroix, Baptiste Rozi\u00e8re, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. Llama: Open and efficient foundation language models."} | null |
|
86ff824d-b1cf-445f-a6d5-21adf5a5a76e | 2306.02254v2.pdf | text | Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. 2022. Opt: Open pretrained transformer language models. | null | 442 | 151 | 72 | image/png | 10 | application/pdf | 1.0.0 | [] | null | null | {"self_ref": "#/texts/116", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 9, "bbox": {"l": 70.21251678466797, "t": 767.6014404296875, "r": 290.7919616699219, "b": 692.1590576171875, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 331]}], "orig": "Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. 2022. Opt: Open pretrained transformer language models.", "text": "Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. 2022. Opt: Open pretrained transformer language models."} | null |
|
cad52701-4c9e-4f01-9f9c-774ef3588886 | 2306.02254v2.pdf | section_header | A Prompt Modification for SentiNeg Task | null | 439 | 24 | 72 | image/png | 10 | application/pdf | 1.0.0 | [] | null | null | {"self_ref": "#/texts/117", "parent": {"cref": "#/body"}, "children": [], "label": "section_header", "prov": [{"page_no": 9, "bbox": {"l": 69.82532501220703, "t": 679.5789184570312, "r": 289.35736083984375, "b": 667.5003662109375, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 39]}], "orig": "A Prompt Modification for SentiNeg Task", "text": "A Prompt Modification for SentiNeg Task", "level": 1} | null |
|
b5cf75ea-3afb-4dda-9283-d841caa80a17 | 2306.02254v2.pdf | text | In the case of SentiNeg, we observed that the prompt used in the KoBEST paper employed a simple classification task. However, the range of questions encompassed a wide spectrum and often exhibited ambiguity. Consequently, we made arbitrary modifications to the prompt in order to obtain results. The results of these prompt modifications are presented in the left portion of Table 5 and Figure 3a. It is evident from the results that the scores achieved with the modified prompt are substantially higher on average than those achieved with the original prompt. This demonstrates the importance of prompt design and customization in achieving better performance in the SentiNeg task. | null | 443 | 375 | 72 | image/png | 10 | application/pdf | 1.0.0 | [] | null | null | {"self_ref": "#/texts/118", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 9, "bbox": {"l": 69.68687438964844, "t": 657.811767578125, "r": 291.04376220703125, "b": 470.267333984375, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 682]}], "orig": "In the case of SentiNeg, we observed that the prompt used in the KoBEST paper employed a simple classification task. However, the range of questions encompassed a wide spectrum and often exhibited ambiguity. Consequently, we made arbitrary modifications to the prompt in order to obtain results. The results of these prompt modifications are presented in the left portion of Table 5 and Figure 3a. It is evident from the results that the scores achieved with the modified prompt are substantially higher on average than those achieved with the original prompt. This demonstrates the importance of prompt design and customization in achieving better performance in the SentiNeg task.", "text": "In the case of SentiNeg, we observed that the prompt used in the KoBEST paper employed a simple classification task. However, the range of questions encompassed a wide spectrum and often exhibited ambiguity. Consequently, we made arbitrary modifications to the prompt in order to obtain results. The results of these prompt modifications are presented in the left portion of Table 5 and Figure 3a. It is evident from the results that the scores achieved with the modified prompt are substantially higher on average than those achieved with the original prompt. This demonstrates the importance of prompt design and customization in achieving better performance in the SentiNeg task."} | null |
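
The prompt change described above can be sketched, very roughly, as follows. This is a hypothetical Python illustration only: the template strings, the example sentences, and the `build_few_shot_prompt` helper are assumptions made for this sketch and are not the exact KoBEST wording or the code used in the evaluation.

```python
# Minimal sketch of rewording a binary sentiment prompt (hypothetical wording;
# not the exact templates from the KoBEST paper or from our evaluation).

# Original style: a terse classification cue appended to the sentence.
ORIGINAL_TEMPLATE = "문장: {sentence}\n긍정 혹은 부정:"

# Modified style: an explicit question that narrows the expected answer space.
MODIFIED_TEMPLATE = "다음 문장은 긍정입니까, 부정입니까?\n문장: {sentence}\n정답:"


def build_few_shot_prompt(template, shots, query):
    """Format k labelled examples followed by the unlabelled query sentence."""
    blocks = [template.format(sentence=s) + " " + label for s, label in shots]
    blocks.append(template.format(sentence=query))
    return "\n\n".join(blocks)


if __name__ == "__main__":
    shots = [
        ("배송이 정말 빨라서 좋았어요", "긍정"),      # hypothetical positive example
        ("품질이 기대보다 많이 떨어집니다", "부정"),  # hypothetical negative example
    ]
    print(build_few_shot_prompt(MODIFIED_TEMPLATE, shots, "가격 대비 만족스러운 제품입니다"))
```

In few-shot settings the same template is repeated for every labelled example, so even a small wording change compounds across the prompt, which is one plausible reason the modified prompt scores higher on average.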
|
af5cb6e8-0008-412d-a6c2-c1d6afa08a52 | 2306.02254v2.pdf | section_header | B Results for the WiC task | null | 292 | 23 | 72 | image/png | 10 | application/pdf | 1.0.0 | [] | null | null | {"self_ref": "#/texts/119", "parent": {"cref": "#/body"}, "children": [], "label": "section_header", "prov": [{"page_no": 9, "bbox": {"l": 70.12764739990234, "t": 458.3462219238281, "r": 215.92274475097656, "b": 447.0843811035156, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 26]}], "orig": "B Results for the WiC task", "text": "B Results for the WiC task", "level": 1} | null |
|
ce882876-4657-49d4-8aa1-63410acef4be | 2306.02254v2.pdf | text | The results of the WiC task are presented in the right portion of Table 5 and visualized in Figure 3b. Notably, all models demonstrated random performance in this task. We utilized the accuracy metric as it provided a more straightforward evaluation of performance, particularly for random performance. The consistent random performance across all models indicates that they encountered challenges in making accurate predictions for the WiC task. Therefore, further investigation and improvements are necessary to enhance the models' capability to effectively tackle this task. | null | 443 | 322 | 72 | image/png | 10 | application/pdf | 1.0.0 | [] | null | null | {"self_ref": "#/texts/120", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 9, "bbox": {"l": 69.71341705322266, "t": 437.3770446777344, "r": 290.94970703125, "b": 276.30596923828125, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 577]}], "orig": "The results of the WiC task are presented in the right portion of Table 5 and visualized in Figure 3b. Notably, all models demonstrated random performance in this task. We utilized the accuracy metric as it provided a more straightforward evaluation of performance, particularly for random performance. The consistent random performance across all models indicates that they encountered challenges in making accurate predictions for the WiC task. Therefore, further investigation and improvements are necessary to enhance the models' capability to effectively tackle this task.", "text": "The results of the WiC task are presented in the right portion of Table 5 and visualized in Figure 3b. Notably, all models demonstrated random performance in this task. We utilized the accuracy metric as it provided a more straightforward evaluation of performance, particularly for random performance. The consistent random performance across all models indicates that they encountered challenges in making accurate predictions for the WiC task. Therefore, further investigation and improvements are necessary to enhance the models' capability to effectively tackle this task."} | null |
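
Because chance-level behaviour is the key observation here, a small sketch of how accuracy relates to the random baseline may be useful. This is an illustrative snippet only; the labels and predictions below are made up, and it is not the evaluation code used for the paper.

```python
# Illustrative only: accuracy for a binary WiC-style task, compared against the
# 50% chance baseline expected from random guessing on balanced labels.

def accuracy(preds, golds):
    """Fraction of predictions that exactly match the gold labels."""
    if len(preds) != len(golds) or not golds:
        raise ValueError("predictions and gold labels must be non-empty and aligned")
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)


if __name__ == "__main__":
    golds = [0, 1, 1, 0, 1, 0, 0, 1]   # balanced binary gold labels (made up)
    preds = [1, 1, 0, 0, 1, 1, 0, 0]   # hypothetical model predictions
    chance = 0.5                        # expected accuracy of a random guesser on balanced data
    print(f"accuracy = {accuracy(preds, golds):.3f} (chance level ≈ {chance:.2f})")
```

A model whose accuracy stays near 0.5 across shot settings, as observed for WiC, is indistinguishable from random guessing under this metric.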