Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748-8763. PMLR.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
William J. Rapaport. 2002. Holism, conceptual-role semantics, and syntactic semantics. Minds and Machines, 12(1):3-59.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. 2015. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211-252.
Magnus Sahlgren and Fredrik Carlsson. 2021. The singleton fallacy: Why current critiques of language models miss the point. Frontiers in Artificial Intelligence, 4(682578).
Jona Sassenhagen and Christian J. Fiebach. 2020. Traces of Meaning Itself: Encoding Distributional Word Vectors in Brain Activity. Neurobiology of Language, 1(1):54-76.
Peter H. Schönemann. 1966. A generalized solution of the orthogonal Procrustes problem. Psychometrika, 31(1):1-10.
Martin Schrimpf, Idan Blank, Greta Tuckute, Carina Kauf, Eghbal A. Hosseini, Nancy Kanwisher, Joshua Tenenbaum, and Evelina Fedorenko. 2021. The neural architecture of language: Integrative modeling converges on predictive processing. bioRxiv.
Martin Schrimpf, Jonas Kubilius, Ha Hong, Najib Majaj, Rishi Rajalingham, Elias B. Issa, Kohitij Kar, Pouya Bashivan, Jonathan Prescott-Roy, Kailyn Schmidt, Daniel L. K. Yamins, and James J. DiCarlo. 2018. Brain-score: Which artificial neural network for object recognition is most brain-like? bioRxiv.
John R. Searle. 1980. Minds, brains, and programs. Behavioral and Brain Sciences, 3:417-424.
Nicholas Shea. 2018. Representation in Cognitive Science. Oxford University Press.
Anders Søgaard, Sebastian Ruder, and Ivan Vulić. 2018. On the Limitations of Unsupervised Bilingual Dictionary Induction. In Proceedings of ACL 2018.
Ryan Teehan, Miruna Clinciu, Oleg Serikov, Eliza Szczechla, Natasha Seelam, Shachar Mirkin, and Aaron Gokaslan. 2022. Emergent structures and training dynamics in large language models. In Proceedings of BigScience Episode #5 - Workshop on Challenges & Perspectives in Creating Large Language Models, pages 146-159, virtual+Dublin. Association for Computational Linguistics.
Mariya Toneva and Leila Wehbe. 2019. Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain). Advances in Neural Information Processing Systems, 32.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.
Iulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Well-read students learn better: On the importance of pre-training compact models. arXiv preprint arXiv:1908.08962v2.
Ivan Vulić, Douwe Kiela, Stephen Clark, and Marie-Francine Moens. 2016. Multi-modal representations for improved bilingual lexicon learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 188-194, Berlin, Germany. Association for Computational Linguistics.
Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. 2022. Emergent abilities of large language models. Transactions on Machine Learning Research. Survey Certification.
Daniel Williams. 2018. Predictive processing and the representation wars. Minds and Machines, 28(1):141-172.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, and Ping Luo. 2021. SegFormer: Simple and efficient design for semantic segmentation with transformers. In Advances in Neural Information Processing Systems, volume 34, pages 12077-12090. Curran Associates, Inc.
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. 2022. OPT: Open pre-trained transformer language models.
Jiawei Zhao and Andrew Gilman. 2020. Nonlinearity in mapping based cross-lingual word embeddings. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 3583-3589, Marseille, France. European Language Resources Association.
Bolei Zhou, Hang Zhao, Xavier Puig, Sanja Fidler, Adela Barriuso, and Antonio Torralba. 2017. Scene parsing through ADE20K dataset. In Computer Vision and Pattern Recognition.
Yukun Zhu, Ryan Kiros, Richard S. Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In 2015 IEEE International Conference on Computer Vision (ICCV), pages 19-27.
Andy Zou, Long Phan, Sarah Chen, James Campbell, Phillip Guo, Richard Ren, Alexander Pan, Xuwang Yin, Mantas Mazeika, Ann-Kathrin Dombrowski, et al. 2023. Representation engineering: A top-down approach to AI transparency. arXiv preprint arXiv:2310.01405.
A Detailed Experimental Settings
A.1 Computational Environment
Our primary software toolkits included HuggingFace Transformers 4.36.2 (Wolf et al., 2020) and PyTorch 2.1.2 (Paszke et al., 2019). We ran our experiments on two NVIDIA A100 40GB GPUs.
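As a quick sanity check, this setup can be verified with a few lines of Python (a minimal sketch; the assertions simply mirror the versions and hardware reported above):

```python
# Sanity-check the pinned environment (versions as reported above).
import torch
import transformers

assert transformers.__version__.startswith("4.36"), transformers.__version__
assert torch.__version__.startswith("2.1"), torch.__version__

# Our experiments assume two A100 40GB cards are visible.
print(f"{torch.cuda.device_count()} CUDA device(s) available")
for i in range(torch.cuda.device_count()):
    print(torch.cuda.get_device_name(i))
```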
A.2 Model Details
Except for the ResNet and CLIP models, all other models used in this study come from Hugging Face Transformers (Table 7).
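For reference, a Table 7 checkpoint can be loaded from the Hugging Face Hub with a few lines of Transformers code. The sketch below is illustrative only: the choice of BERT_TINY, the example sentence, and the mean pooling over the last layer are our assumptions, not a description of the paper's exact extraction procedure.

```python
# Illustrative: load one language model from Table 7 and extract hidden states.
import torch
from transformers import AutoModel, AutoTokenizer

name = "google/bert_uncased_L-2_H-128_A-2"  # BERT_TINY (see Table 7)
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, output_hidden_states=True).eval()

inputs = tokenizer("a photograph of a dog", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# out.hidden_states is a tuple of (num_layers + 1) tensors,
# each of shape (batch, seq_len, hidden_size).
embedding = out.hidden_states[-1].mean(dim=1)  # mean-pool the last layer
print(embedding.shape)  # torch.Size([1, 128]) for BERT_TINY
```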
Table 7: Sources of models in our experiments.
Models | Links
BERT_TINY | https://huggingface.co/google/bert_uncased_L-2_H-128_A-2
BERT_MINI | https://huggingface.co/google/bert_uncased_L-4_H-256_A-4
BERT_SMALL | https://huggingface.co/google/bert_uncased_L-4_H-512_A-8
BERT_MEDIUM | https://huggingface.co/google/bert_uncased_L-8_H-512_A-8
BERT_BASE | https://huggingface.co/bert-base-uncased
BERT_LARGE | https://huggingface.co/bert-large-uncased
GPT-2_BASE | https://huggingface.co/openai-community/gpt2
GPT-2_LARGE | https://huggingface.co/openai-community/gpt2-large
GPT-2_XL | https://huggingface.co/openai-community/gpt2-xl
OPT_125M | https://huggingface.co/facebook/opt-125m
OPT_6.7B | https://huggingface.co/facebook/opt-6.7b
OPT_30B | https://huggingface.co/facebook/opt-30b
LLaMA-2_7B | https://huggingface.co/meta-llama/Llama-2-7b
LLaMA-2_13B | https://huggingface.co/meta-llama/Llama-2-13b
SegFormer-B0 | https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512
SegFormer-B1 | https://huggingface.co/nvidia/segformer-b1-finetuned-ade-512-512
SegFormer-B2 | https://huggingface.co/nvidia/segformer-b2-finetuned-ade-512-512
SegFormer-B3 | https://huggingface.co/nvidia/segformer-b3-finetuned-ade-512-512
SegFormer-B4 | https://huggingface.co/nvidia/segformer-b4-finetuned-ade-512-512
SegFormer-B5 | https://huggingface.co/nvidia/segformer-b5-finetuned-ade-640-640
MAE_BASE | https://huggingface.co/facebook/vit-mae-base
MAE_LARGE | https://huggingface.co/facebook/vit-mae-large
MAE_HUGE | https://huggingface.co/facebook/vit-mae-huge
ResNet18 / ResNet34 / ResNet50 / ResNet101 / ResNet152 | https://pypi.org/project/img2vec-pytorch/
CLIP-RN50 / CLIP-RN101 / CLIP-RN50x64 / CLIP-ViT-B-32 | https://github.com/openai/CLIP
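The vision checkpoints in Table 7 that are hosted on the Hub can be loaded the same way. The following is again a sketch under our own assumptions (MAE_BASE as the example checkpoint, a blank placeholder image standing in for real input):

```python
# Illustrative: load a vision model from Table 7 and extract patch features.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

name = "facebook/vit-mae-base"  # MAE_BASE (see Table 7)
processor = AutoImageProcessor.from_pretrained(name)
model = AutoModel.from_pretrained(name).eval()

image = Image.new("RGB", (224, 224))  # placeholder; use a real image in practice
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# out.last_hidden_state has shape (batch, num_visible_patches + 1, hidden_size);
# ViT-MAE randomly masks patches by default, so the sequence is shortened.
print(out.last_hidden_state.shape)
```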
"start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "OPT$_{30B}$", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 380.44927978515625, "t": 360.0543212890625, "r": 488.1560363769531, "b": 356.5079040527344, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 10, "end_row_offset_idx": 11, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "https://huggingface.co/facebook/opt-30b", "column_header": false, "row_header": false, "row_section": false}], [{"bbox": {"l": 310.941650390625, "t": 352.1920471191406, "r": 333.73883056640625, "b": 348.0851745605469, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 11, "end_row_offset_idx": 12, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "LLaMA-2$_{7B}$", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 373.34869384765625, "t": 351.9485778808594, "r": 495.2549743652344, "b": 348.40216064453125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 11, "end_row_offset_idx": 12, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "https://huggingface.co/meta-llama/Llama-2-7b", "column_header": false, "row_header": false, "row_section": false}], [{"bbox": {"l": 310.9416809082031, "t": 346.486572265625, "r": 335.41693115234375, "b": 342.37969970703125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 12, "end_row_offset_idx": 13, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "LLaMA-2$_{13B}$", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 371.9704284667969, "t": 346.24310302734375, "r": 496.6329650878906, "b": 342.6966552734375, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 12, "end_row_offset_idx": 13, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "https://huggingface.co/meta-llama/Llama-2-13b", "column_header": false, "row_header": false, "row_section": false}], [{"bbox": {"l": 310.941650390625, "t": 338.38079833984375, "r": 338.0176696777344, "b": 334.27392578125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 13, "end_row_offset_idx": 14, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "SegFormer-B0", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 345.5762939453125, "t": 338.1373291015625, "r": 523.0264282226562, "b": 334.5909118652344, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 13, "end_row_offset_idx": 14, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512", "column_header": false, "row_header": false, "row_section": false}], [{"bbox": {"l": 310.9416809082031, "t": 332.6749267578125, "r": 338.0176696777344, "b": 328.56805419921875, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 14, "end_row_offset_idx": 15, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "SegFormer-B1", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 345.5762939453125, "t": 332.4314270019531, "r": 523.0264282226562, "b": 328.885009765625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 14, "end_row_offset_idx": 15, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": 
"https://huggingface.co/nvidia/segformer-b1-finetuned-ade-512-512", "column_header": false, "row_header": false, "row_section": false}], [{"bbox": {"l": 310.9416809082031, "t": 326.96942138671875, "r": 338.0176696777344, "b": 322.862548828125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 15, "end_row_offset_idx": 16, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "SegFormer-B2", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 345.5762939453125, "t": 326.7259521484375, "r": 523.0264282226562, "b": 323.17950439453125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 15, "end_row_offset_idx": 16, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "https://huggingface.co/nvidia/segformer-b2-finetuned-ade-512-512", "column_header": false, "row_header": false, "row_section": false}], [{"bbox": {"l": 310.9416809082031, "t": 321.263916015625, "r": 338.0176696777344, "b": 317.15704345703125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 16, "end_row_offset_idx": 17, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "SegFormer-B3", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 345.5762939453125, "t": 321.02044677734375, "r": 523.0264282226562, "b": 317.47406005859375, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 16, "end_row_offset_idx": 17, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "https://huggingface.co/nvidia/segformer-b3-finetuned-ade-512-512", "column_header": false, "row_header": false, "row_section": false}], [{"bbox": {"l": 310.9416809082031, "t": 315.5584716796875, "r": 338.0176696777344, "b": 305.74609375, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 17, "end_row_offset_idx": 18, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "SegFormer-B4 SegFormer-B5", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 345.5762939453125, "t": 315.3149719238281, "r": 523.0264282226562, "b": 306.06304931640625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 17, "end_row_offset_idx": 18, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "https://huggingface.co/nvidia/segformer-b4-finetuned-ade-512-512 https://huggingface.co/nvidia/segformer-b5-finetuned-ade-640-640", "column_header": false, "row_header": false, "row_section": false}], [{"bbox": {"l": 310.941650390625, "t": 301.7472229003906, "r": 332.33538818359375, "b": 291.9344482421875, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 18, "end_row_offset_idx": 19, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "MAE$_{BASE}$ MAE$_{LARGE}$", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 372.07568359375, "t": 301.50372314453125, "r": 496.52691650390625, "b": 292.25140380859375, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 18, "end_row_offset_idx": 19, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "https://huggingface.co/facebook/vit-mae-base https://huggingface.co/facebook/vit-mae-large", "column_header": false, "row_header": false, "row_section": false}], [{"bbox": {"l": 310.941650390625, "t": 290.3358154296875, "r": 330.46929931640625, "b": 286.22894287109375, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, 
"col_span": 1, "start_row_offset_idx": 19, "end_row_offset_idx": 20, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "MAE$_{HUGE}$", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 373.45355224609375, "t": 290.09234619140625, "r": 495.1484680175781, "b": 286.5459289550781, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 19, "end_row_offset_idx": 20, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "https://huggingface.co/facebook/vit-mae-huge", "column_header": false, "row_header": false, "row_section": false}], [{"bbox": {"l": 310.941650390625, "t": 282.2300720214844, "r": 329.0596923828125, "b": 278.1231994628906, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 20, "end_row_offset_idx": 21, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "ResNet18", "column_header": false, "row_header": false, "row_section": false}, {"bbox": null, "row_span": 1, "col_span": 1, "start_row_offset_idx": 20, "end_row_offset_idx": 21, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "", "column_header": false, "row_header": false, "row_section": false}], [{"bbox": {"l": 310.941650390625, "t": 276.52459716796875, "r": 329.0596923828125, "b": 272.417724609375, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 21, "end_row_offset_idx": 22, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "ResNet34", "column_header": false, "row_header": false, "row_section": false}, {"bbox": null, "row_span": 1, "col_span": 1, "start_row_offset_idx": 21, "end_row_offset_idx": 22, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "", "column_header": false, "row_header": false, "row_section": false}], [{"bbox": {"l": 310.941650390625, "t": 270.819091796875, "r": 329.0596923828125, "b": 266.71221923828125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 22, "end_row_offset_idx": 23, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "ResNet50", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 377.69317626953125, "t": 270.57562255859375, "r": 490.9125061035156, "b": 267.0292053222656, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 22, "end_row_offset_idx": 23, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "https://pypi.org/project/img2vec-pytorch/", "column_header": false, "row_header": false, "row_section": false}], [{"bbox": {"l": 310.941650390625, "t": 265.1136169433594, "r": 331.35662841796875, "b": 261.0067443847656, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 23, "end_row_offset_idx": 24, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "ResNet101", "column_header": false, "row_header": false, "row_section": false}, {"bbox": null, "row_span": 1, "col_span": 1, "start_row_offset_idx": 23, "end_row_offset_idx": 24, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "", "column_header": false, "row_header": false, "row_section": false}], [{"bbox": {"l": 310.941650390625, "t": 259.40771484375, "r": 331.35662841796875, "b": 255.30084228515625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 24, "end_row_offset_idx": 25, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "ResNet152", "column_header": false, "row_header": false, "row_section": false}, {"bbox": null, "row_span": 1, 
"col_span": 1, "start_row_offset_idx": 24, "end_row_offset_idx": 25, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "", "column_header": false, "row_header": false, "row_section": false}], [{"bbox": {"l": 310.941650390625, "t": 251.30197143554688, "r": 333.40087890625, "b": 247.19509887695312, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 25, "end_row_offset_idx": 26, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "CLIP-RN50", "column_header": false, "row_header": false, "row_section": false}, {"bbox": null, "row_span": 1, "col_span": 1, "start_row_offset_idx": 25, "end_row_offset_idx": 26, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "", "column_header": false, "row_header": false, "row_section": false}], [{"bbox": {"l": 310.941650390625, "t": 245.5964813232422, "r": 340.2915954589844, "b": 235.78411865234375, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 26, "end_row_offset_idx": 27, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "CLIP-RN101 CLIP-RN50*64", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 392.9576416015625, "t": 239.64752197265625, "r": 475.6464538574219, "b": 236.10110473632812, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 26, "end_row_offset_idx": 27, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "https://github.com/openai/CLIP", "column_header": false, "row_header": false, "row_section": false}], [{"bbox": {"l": 310.9416809082031, "t": 234.18551635742188, "r": 340.3742980957031, "b": 230.07864379882812, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 27, "end_row_offset_idx": 28, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "CLIP-VIT-B-32", "column_header": false, "row_header": false, "row_section": false}, {"bbox": null, "row_span": 1, "col_span": 1, "start_row_offset_idx": 27, "end_row_offset_idx": 28, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "", "column_header": false, "row_header": false, "row_section": false}]]}}
null
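The table record above maps each model name used in the paper to its public checkpoint link. As a purely illustrative sketch (not part of the source records), one of the listed HuggingFace checkpoints can be loaded with the transformers library; the choice of facebook/vit-mae-base is arbitrary, and the paper's own feature-extraction pipeline may differ.

from transformers import AutoImageProcessor, ViTMAEModel

# Load one of the checkpoints listed in the table above (illustrative choice only).
processor = AutoImageProcessor.from_pretrained("facebook/vit-mae-base")
model = ViTMAEModel.from_pretrained("facebook/vit-mae-base")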
e895876f-ed1f-4dee-b6a9-f8c381e3a053
2302.06555v2.pdf
section_header
B Dispersion Details
null
226
24
72
image/png
17
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/205", "parent": {"cref": "#/body"}, "children": [], "label": "section_header", "prov": [{"page_no": 16, "bbox": {"l": 306.82098388671875, "t": 170.36370849609375, "r": 419.8581237792969, "b": 158.57666015625, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 20]}], "orig": "B Dispersion Details", "text": "B Dispersion Details", "level": 1}
null
8c4e241e-2494-45f3-ab19-60fc77f1a40c
2302.06555v2.pdf
section_header
B.1 Image Dispersion
null
217
22
72
image/png
17
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/206", "parent": {"cref": "#/body"}, "children": [], "label": "section_header", "prov": [{"page_no": 16, "bbox": {"l": 306.6719055175781, "t": 146.79437255859375, "r": 415.15606689453125, "b": 136.0662841796875, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 20]}], "orig": "B.1 Image Dispersion", "text": "B.1 Image Dispersion", "level": 1}
null
0335479c-87ac-4855-90cd-daf4e8ca77a3
2302.06555v2.pdf
text
The image dispersion d of a concept alias a is defined as the average pairwise cosine distance between all the image representations i$_{1}$, i$_{2}$...i$_{n}$ in the set of n images for a given alias (Kiela et al.,
null
442
103
72
image/png
17
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/207", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 16, "bbox": {"l": 306.2406921386719, "t": 127.46954345703125, "r": 527.3583984375, "b": 75.8514404296875, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 215]}], "orig": "The image dispersion d of a concept alias a is defined as the average pairwise cosine distance between all the image representations i$_{1}$, i$_{2}$...i$_{n}$ in the set of n images for a given alias (Kiela et al.,", "text": "The image dispersion d of a concept alias a is defined as the average pairwise cosine distance between all the image representations i$_{1}$, i$_{2}$...i$_{n}$ in the set of n images for a given alias (Kiela et al.,"}
null
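The text record above defines image dispersion as the average pairwise cosine distance between the n image embeddings of an alias. Below is a minimal sketch of that computation, assuming the embeddings are already available as an (n, d) NumPy array; the function name image_dispersion and the argument image_vecs are illustrative, not from the source.

import numpy as np

def image_dispersion(image_vecs: np.ndarray) -> float:
    # Average pairwise cosine distance between the n image embeddings of one alias.
    normed = image_vecs / np.linalg.norm(image_vecs, axis=1, keepdims=True)
    sims = normed @ normed.T                       # (n, n) cosine similarities
    i, j = np.triu_indices(len(image_vecs), k=1)   # unique pairs with i < j
    return float(np.mean(1.0 - sims[i, j]))        # cosine distances, averaged over pairs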
650a2730-51c7-412d-a473-4a58f4a6ad31
2302.06555v2.pdf
caption
Table 8: The cumulative explained variance ratios for different models and sizes.
null
439
48
72
image/png
18
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/208", "parent": {"cref": "#/body"}, "children": [], "label": "caption", "prov": [{"page_no": 17, "bbox": {"l": 71.0458984375, "t": 701.2876586914062, "r": 290.2685241699219, "b": 677.588623046875, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 84]}], "orig": "Table 8: The cumulative of explained variance ratios for different models and sizes.", "text": "Table 8: The cumulative of explained variance ratios for different models and sizes."}
null
68880cdd-02ce-461a-a416-0704b0d907e1
2302.06555v2.pdf
table
<table><tbody><tr><td>Model</td><th colspan="7">Explained Variance Ratio (Sum)</th></tr><tr><td></td><th>256</th><th>512</th><th>768</th><th>1024</th><th>1280</th><th>2048</th><th>Max</th></tr><tr><td>MAE$_{Huge}$</td><td>0.9735</td><td>0.9922</td><td>0.9975</td><td>0.9994</td><td>1.0000</td><td>-</td><td>1.0000</td></tr><tr><td>ResNet152</td><td>0.9795</td><td>0.9942</td><td>0.9974</td><td>0.9987</td><td>0.9993</td><td>1.0000</td><td>1.0000</td></tr><tr><td>SegFormer-B5</td><td>0.9685</td><td>1.0000</td><td>-</td><td>-</td><td>-</td><td>-</td><td>1.0000</td></tr><tr><td>LLaMA-2$_{13B}$</td><td>0.5708</td><td>0.6662</td><td>0.7277</td><td>0.7725</td><td>0.8077</td><td>0.8814</td><td>1.0000</td></tr><tr><td>OPT$_{30B}$</td><td>0.4926</td><td>0.6002</td><td>0.6664</td><td>0.7164</td><td>0.7554</td><td>0.8360</td><td>1.0000</td></tr></tbody></table>
Table 8: The cumulative explained variance ratios for different models and sizes.
436
136
72
image/png
18
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/tables/7", "parent": {"cref": "#/body"}, "children": [], "label": "table", "prov": [{"page_no": 17, "bbox": {"l": 72.62456512451172, "t": 779.7711181640625, "r": 290.5558166503906, "b": 712.1237182617188, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 0]}], "captions": [{"cref": "#/texts/208"}], "references": [], "footnotes": [], "image": {"mimetype": "image/png", "dpi": 144, "size": {"width": 436.0, "height": 136.0}, "uri": null}, "data": {"table_cells": [{"bbox": {"l": 77.09851837158203, "t": 771.510986328125, "r": 93.7813949584961, "b": 765.798828125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "Model", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 162.82977294921875, "t": 775.478515625, "r": 245.70103454589844, "b": 769.766357421875, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 7, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 1, "end_col_offset_idx": 8, "text": "Explained Variance Ratio (Sum)", "column_header": true, "row_header": false, "row_section": false}, {"bbox": null, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 125.7531967163086, "t": 764.3327026367188, "r": 135.33738708496094, "b": 758.6204833984375, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "256", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 150.32705688476562, "t": 764.3327026367188, "r": 159.9112548828125, "b": 758.6204833984375, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "512", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 174.9009246826172, "t": 764.3327026367188, "r": 184.485107421875, "b": 758.6204833984375, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "768", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 197.87741088867188, "t": 764.3327026367188, "r": 210.65634155273438, "b": 758.6204833984375, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 4, "end_col_offset_idx": 5, "text": "1024", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 222.45127868652344, "t": 764.3327026367188, "r": 235.23019409179688, "b": 758.6204833984375, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 5, "end_col_offset_idx": 6, "text": "1280", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 247.0187530517578, "t": 764.3327026367188, "r": 259.79766845703125, "b": 758.6204833984375, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 6, "end_col_offset_idx": 7, "text": "2048", "column_header": true, "row_header": false, "row_section": 
false}, {"bbox": {"l": 272.12933349609375, "t": 764.3327026367188, "r": 283.8411865234375, "b": 758.6204833984375, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 7, "end_col_offset_idx": 8, "text": "Max", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 77.09851837158203, "t": 753.0579833984375, "r": 101.40696716308594, "b": 747.3458251953125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "MAE$_{Huge}$", "column_header": false, "row_header": true, "row_section": false}, {"bbox": {"l": 121.75990295410156, "t": 753.0579833984375, "r": 139.33091735839844, "b": 747.3458251953125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "0.9735", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 146.33377075195312, "t": 753.0579833984375, "r": 163.90478515625, "b": 747.3458251953125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "0.9922", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 170.90762329101562, "t": 753.0579833984375, "r": 188.4786376953125, "b": 747.3458251953125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "0.9975", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 195.48147583007812, "t": 753.0579833984375, "r": 213.052490234375, "b": 747.3458251953125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 4, "end_col_offset_idx": 5, "text": "0.9994", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 220.05532836914062, "t": 753.0579833984375, "r": 237.62635803222656, "b": 747.3458251953125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 5, "end_col_offset_idx": 6, "text": "1.0000", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 252.34765625, "t": 753.0579833984375, "r": 254.475341796875, "b": 747.3458251953125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 6, "end_col_offset_idx": 7, "text": "-", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 269.1966552734375, "t": 753.0579833984375, "r": 286.76763916015625, "b": 747.3458251953125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 7, "end_col_offset_idx": 8, "text": "1.0000", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 77.09852600097656, "t": 745.122314453125, "r": 105.49327087402344, "b": 739.41015625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "ResNet152", "column_header": false, "row_header": true, 
"row_section": false}, {"bbox": {"l": 121.76083374023438, "t": 745.122314453125, "r": 139.33184814453125, "b": 739.41015625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "0.9795", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 146.33470153808594, "t": 745.122314453125, "r": 163.9057159423828, "b": 739.41015625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "0.9942", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 170.90855407714844, "t": 745.122314453125, "r": 188.4795684814453, "b": 739.41015625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "0.9974", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 195.48240661621094, "t": 745.122314453125, "r": 213.0534210205078, "b": 739.41015625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 4, "end_col_offset_idx": 5, "text": "0.9987", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 220.0498809814453, "t": 745.122314453125, "r": 237.6208953857422, "b": 739.41015625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 5, "end_col_offset_idx": 6, "text": "0.9993", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 244.6237335205078, "t": 745.122314453125, "r": 262.19476318359375, "b": 739.41015625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 6, "end_col_offset_idx": 7, "text": "1.0000", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 269.1976013183594, "t": 745.122314453125, "r": 286.7685852050781, "b": 739.41015625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 7, "end_col_offset_idx": 8, "text": "1.0000", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 77.09852600097656, "t": 737.1866455078125, "r": 114.75800323486328, "b": 731.4744873046875, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 4, "end_row_offset_idx": 5, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "SegFormer-B5", "column_header": false, "row_header": true, "row_section": false}, {"bbox": {"l": 121.76083374023438, "t": 737.1866455078125, "r": 139.33184814453125, "b": 731.4744873046875, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 4, "end_row_offset_idx": 5, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "0.9685", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 146.33470153808594, "t": 737.1866455078125, "r": 163.9057159423828, "b": 731.4744873046875, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 4, "end_row_offset_idx": 5, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "1.0000", "column_header": false, "row_header": false, 
"row_section": false}, {"bbox": {"l": 178.6270294189453, "t": 737.1866455078125, "r": 180.7547149658203, "b": 731.4744873046875, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 4, "end_row_offset_idx": 5, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "-", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 203.20089721679688, "t": 737.1866455078125, "r": 205.32858276367188, "b": 731.4744873046875, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 4, "end_row_offset_idx": 5, "start_col_offset_idx": 4, "end_col_offset_idx": 5, "text": "-", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 227.77474975585938, "t": 737.1866455078125, "r": 229.90243530273438, "b": 731.4744873046875, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 4, "end_row_offset_idx": 5, "start_col_offset_idx": 5, "end_col_offset_idx": 6, "text": "-", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 252.34860229492188, "t": 737.1866455078125, "r": 254.47628784179688, "b": 731.4744873046875, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 4, "end_row_offset_idx": 5, "start_col_offset_idx": 6, "end_col_offset_idx": 7, "text": "-", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 269.1976013183594, "t": 737.1866455078125, "r": 286.7685852050781, "b": 731.4744873046875, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 4, "end_row_offset_idx": 5, "start_col_offset_idx": 7, "end_col_offset_idx": 8, "text": "1.0000", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 77.09852600097656, "t": 729.2509765625, "r": 111.14067077636719, "b": 723.538818359375, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 5, "end_row_offset_idx": 6, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "LLaMA-2$_{13B}$", "column_header": false, "row_header": true, "row_section": false}, {"bbox": {"l": 121.75990295410156, "t": 729.2509765625, "r": 139.33091735839844, "b": 723.538818359375, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 5, "end_row_offset_idx": 6, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "0.5708", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 146.33377075195312, "t": 729.2509765625, "r": 163.90478515625, "b": 723.538818359375, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 5, "end_row_offset_idx": 6, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "0.6662", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 170.90762329101562, "t": 729.2509765625, "r": 188.4786376953125, "b": 723.538818359375, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 5, "end_row_offset_idx": 6, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "0.7277", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 195.48147583007812, "t": 729.2509765625, "r": 213.052490234375, "b": 723.538818359375, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 5, "end_row_offset_idx": 6, "start_col_offset_idx": 4, "end_col_offset_idx": 5, "text": "0.7725", "column_header": false, "row_header": false, 
"row_section": false}, {"bbox": {"l": 220.05532836914062, "t": 729.2509765625, "r": 237.62635803222656, "b": 723.538818359375, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 5, "end_row_offset_idx": 6, "start_col_offset_idx": 5, "end_col_offset_idx": 6, "text": "0.8077", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 244.622802734375, "t": 729.2509765625, "r": 262.1938171386719, "b": 723.538818359375, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 5, "end_row_offset_idx": 6, "start_col_offset_idx": 6, "end_col_offset_idx": 7, "text": "0.8814", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 269.1966552734375, "t": 729.2509765625, "r": 286.7676696777344, "b": 723.538818359375, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 5, "end_row_offset_idx": 6, "start_col_offset_idx": 7, "end_col_offset_idx": 8, "text": "1.0000", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 77.09852600097656, "t": 721.3153076171875, "r": 96.94973754882812, "b": 715.6031494140625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 6, "end_row_offset_idx": 7, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "OPT$_{30B}$", "column_header": false, "row_header": true, "row_section": false}, {"bbox": {"l": 121.75990295410156, "t": 721.3153076171875, "r": 139.33091735839844, "b": 715.6031494140625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 6, "end_row_offset_idx": 7, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "0.4926", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 146.33377075195312, "t": 721.3153076171875, "r": 163.90478515625, "b": 715.6031494140625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 6, "end_row_offset_idx": 7, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "0.6002", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 170.90762329101562, "t": 721.3153076171875, "r": 188.4786376953125, "b": 715.6031494140625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 6, "end_row_offset_idx": 7, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "0.6664", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 195.48147583007812, "t": 721.3153076171875, "r": 213.052490234375, "b": 715.6031494140625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 6, "end_row_offset_idx": 7, "start_col_offset_idx": 4, "end_col_offset_idx": 5, "text": "0.7164", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 220.05532836914062, "t": 721.3153076171875, "r": 237.62635803222656, "b": 715.6031494140625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 6, "end_row_offset_idx": 7, "start_col_offset_idx": 5, "end_col_offset_idx": 6, "text": "0.7554", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 244.622802734375, "t": 721.3153076171875, "r": 262.1938171386719, "b": 715.6031494140625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 6, "end_row_offset_idx": 7, "start_col_offset_idx": 6, "end_col_offset_idx": 7, "text": "0.8360", "column_header": false, 
"row_header": false, "row_section": false}, {"bbox": {"l": 269.1966552734375, "t": 721.3153076171875, "r": 286.7676696777344, "b": 715.6031494140625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 6, "end_row_offset_idx": 7, "start_col_offset_idx": 7, "end_col_offset_idx": 8, "text": "1.0000", "column_header": false, "row_header": false, "row_section": false}], "num_rows": 7, "num_cols": 8, "grid": [[{"bbox": {"l": 77.09851837158203, "t": 771.510986328125, "r": 93.7813949584961, "b": 765.798828125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "Model", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 162.82977294921875, "t": 775.478515625, "r": 245.70103454589844, "b": 769.766357421875, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 7, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 1, "end_col_offset_idx": 8, "text": "Explained Variance Ratio (Sum)", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 162.82977294921875, "t": 775.478515625, "r": 245.70103454589844, "b": 769.766357421875, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 7, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 1, "end_col_offset_idx": 8, "text": "Explained Variance Ratio (Sum)", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 162.82977294921875, "t": 775.478515625, "r": 245.70103454589844, "b": 769.766357421875, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 7, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 1, "end_col_offset_idx": 8, "text": "Explained Variance Ratio (Sum)", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 162.82977294921875, "t": 775.478515625, "r": 245.70103454589844, "b": 769.766357421875, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 7, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 1, "end_col_offset_idx": 8, "text": "Explained Variance Ratio (Sum)", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 162.82977294921875, "t": 775.478515625, "r": 245.70103454589844, "b": 769.766357421875, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 7, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 1, "end_col_offset_idx": 8, "text": "Explained Variance Ratio (Sum)", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 162.82977294921875, "t": 775.478515625, "r": 245.70103454589844, "b": 769.766357421875, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 7, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 1, "end_col_offset_idx": 8, "text": "Explained Variance Ratio (Sum)", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 162.82977294921875, "t": 775.478515625, "r": 245.70103454589844, "b": 769.766357421875, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 7, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 1, "end_col_offset_idx": 8, "text": "Explained Variance Ratio (Sum)", "column_header": true, "row_header": false, "row_section": false}], [{"bbox": null, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 
0, "end_col_offset_idx": 1, "text": "", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 125.7531967163086, "t": 764.3327026367188, "r": 135.33738708496094, "b": 758.6204833984375, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "256", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 150.32705688476562, "t": 764.3327026367188, "r": 159.9112548828125, "b": 758.6204833984375, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "512", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 174.9009246826172, "t": 764.3327026367188, "r": 184.485107421875, "b": 758.6204833984375, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "768", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 197.87741088867188, "t": 764.3327026367188, "r": 210.65634155273438, "b": 758.6204833984375, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 4, "end_col_offset_idx": 5, "text": "1024", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 222.45127868652344, "t": 764.3327026367188, "r": 235.23019409179688, "b": 758.6204833984375, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 5, "end_col_offset_idx": 6, "text": "1280", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 247.0187530517578, "t": 764.3327026367188, "r": 259.79766845703125, "b": 758.6204833984375, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 6, "end_col_offset_idx": 7, "text": "2048", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 272.12933349609375, "t": 764.3327026367188, "r": 283.8411865234375, "b": 758.6204833984375, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 7, "end_col_offset_idx": 8, "text": "Max", "column_header": true, "row_header": false, "row_section": false}], [{"bbox": {"l": 77.09851837158203, "t": 753.0579833984375, "r": 101.40696716308594, "b": 747.3458251953125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "MAE$_{Huge}$", "column_header": false, "row_header": true, "row_section": false}, {"bbox": {"l": 121.75990295410156, "t": 753.0579833984375, "r": 139.33091735839844, "b": 747.3458251953125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "0.9735", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 146.33377075195312, "t": 753.0579833984375, "r": 163.90478515625, "b": 747.3458251953125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 
2, "end_col_offset_idx": 3, "text": "0.9922", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 170.90762329101562, "t": 753.0579833984375, "r": 188.4786376953125, "b": 747.3458251953125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "0.9975", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 195.48147583007812, "t": 753.0579833984375, "r": 213.052490234375, "b": 747.3458251953125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 4, "end_col_offset_idx": 5, "text": "0.9994", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 220.05532836914062, "t": 753.0579833984375, "r": 237.62635803222656, "b": 747.3458251953125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 5, "end_col_offset_idx": 6, "text": "1.0000", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 252.34765625, "t": 753.0579833984375, "r": 254.475341796875, "b": 747.3458251953125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 6, "end_col_offset_idx": 7, "text": "-", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 269.1966552734375, "t": 753.0579833984375, "r": 286.76763916015625, "b": 747.3458251953125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 7, "end_col_offset_idx": 8, "text": "1.0000", "column_header": false, "row_header": false, "row_section": false}], [{"bbox": {"l": 77.09852600097656, "t": 745.122314453125, "r": 105.49327087402344, "b": 739.41015625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "ResNet152", "column_header": false, "row_header": true, "row_section": false}, {"bbox": {"l": 121.76083374023438, "t": 745.122314453125, "r": 139.33184814453125, "b": 739.41015625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "0.9795", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 146.33470153808594, "t": 745.122314453125, "r": 163.9057159423828, "b": 739.41015625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "0.9942", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 170.90855407714844, "t": 745.122314453125, "r": 188.4795684814453, "b": 739.41015625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "0.9974", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 195.48240661621094, "t": 745.122314453125, "r": 213.0534210205078, "b": 739.41015625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 4, 
"end_col_offset_idx": 5, "text": "0.9987", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 220.0498809814453, "t": 745.122314453125, "r": 237.6208953857422, "b": 739.41015625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 5, "end_col_offset_idx": 6, "text": "0.9993", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 244.6237335205078, "t": 745.122314453125, "r": 262.19476318359375, "b": 739.41015625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 6, "end_col_offset_idx": 7, "text": "1.0000", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 269.1976013183594, "t": 745.122314453125, "r": 286.7685852050781, "b": 739.41015625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 7, "end_col_offset_idx": 8, "text": "1.0000", "column_header": false, "row_header": false, "row_section": false}], [{"bbox": {"l": 77.09852600097656, "t": 737.1866455078125, "r": 114.75800323486328, "b": 731.4744873046875, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 4, "end_row_offset_idx": 5, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "SegFormer-B5", "column_header": false, "row_header": true, "row_section": false}, {"bbox": {"l": 121.76083374023438, "t": 737.1866455078125, "r": 139.33184814453125, "b": 731.4744873046875, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 4, "end_row_offset_idx": 5, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "0.9685", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 146.33470153808594, "t": 737.1866455078125, "r": 163.9057159423828, "b": 731.4744873046875, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 4, "end_row_offset_idx": 5, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "1.0000", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 178.6270294189453, "t": 737.1866455078125, "r": 180.7547149658203, "b": 731.4744873046875, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 4, "end_row_offset_idx": 5, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "-", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 203.20089721679688, "t": 737.1866455078125, "r": 205.32858276367188, "b": 731.4744873046875, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 4, "end_row_offset_idx": 5, "start_col_offset_idx": 4, "end_col_offset_idx": 5, "text": "-", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 227.77474975585938, "t": 737.1866455078125, "r": 229.90243530273438, "b": 731.4744873046875, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 4, "end_row_offset_idx": 5, "start_col_offset_idx": 5, "end_col_offset_idx": 6, "text": "-", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 252.34860229492188, "t": 737.1866455078125, "r": 254.47628784179688, "b": 731.4744873046875, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 4, "end_row_offset_idx": 5, "start_col_offset_idx": 6, 
"end_col_offset_idx": 7, "text": "-", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 269.1976013183594, "t": 737.1866455078125, "r": 286.7685852050781, "b": 731.4744873046875, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 4, "end_row_offset_idx": 5, "start_col_offset_idx": 7, "end_col_offset_idx": 8, "text": "1.0000", "column_header": false, "row_header": false, "row_section": false}], [{"bbox": {"l": 77.09852600097656, "t": 729.2509765625, "r": 111.14067077636719, "b": 723.538818359375, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 5, "end_row_offset_idx": 6, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "LLaMA-2$_{13B}$", "column_header": false, "row_header": true, "row_section": false}, {"bbox": {"l": 121.75990295410156, "t": 729.2509765625, "r": 139.33091735839844, "b": 723.538818359375, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 5, "end_row_offset_idx": 6, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "0.5708", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 146.33377075195312, "t": 729.2509765625, "r": 163.90478515625, "b": 723.538818359375, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 5, "end_row_offset_idx": 6, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "0.6662", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 170.90762329101562, "t": 729.2509765625, "r": 188.4786376953125, "b": 723.538818359375, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 5, "end_row_offset_idx": 6, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "0.7277", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 195.48147583007812, "t": 729.2509765625, "r": 213.052490234375, "b": 723.538818359375, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 5, "end_row_offset_idx": 6, "start_col_offset_idx": 4, "end_col_offset_idx": 5, "text": "0.7725", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 220.05532836914062, "t": 729.2509765625, "r": 237.62635803222656, "b": 723.538818359375, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 5, "end_row_offset_idx": 6, "start_col_offset_idx": 5, "end_col_offset_idx": 6, "text": "0.8077", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 244.622802734375, "t": 729.2509765625, "r": 262.1938171386719, "b": 723.538818359375, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 5, "end_row_offset_idx": 6, "start_col_offset_idx": 6, "end_col_offset_idx": 7, "text": "0.8814", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 269.1966552734375, "t": 729.2509765625, "r": 286.7676696777344, "b": 723.538818359375, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 5, "end_row_offset_idx": 6, "start_col_offset_idx": 7, "end_col_offset_idx": 8, "text": "1.0000", "column_header": false, "row_header": false, "row_section": false}], [{"bbox": {"l": 77.09852600097656, "t": 721.3153076171875, "r": 96.94973754882812, "b": 715.6031494140625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 6, "end_row_offset_idx": 7, "start_col_offset_idx": 0, 
"end_col_offset_idx": 1, "text": "OPT$_{30B}$", "column_header": false, "row_header": true, "row_section": false}, {"bbox": {"l": 121.75990295410156, "t": 721.3153076171875, "r": 139.33091735839844, "b": 715.6031494140625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 6, "end_row_offset_idx": 7, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "0.4926", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 146.33377075195312, "t": 721.3153076171875, "r": 163.90478515625, "b": 715.6031494140625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 6, "end_row_offset_idx": 7, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "0.6002", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 170.90762329101562, "t": 721.3153076171875, "r": 188.4786376953125, "b": 715.6031494140625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 6, "end_row_offset_idx": 7, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "0.6664", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 195.48147583007812, "t": 721.3153076171875, "r": 213.052490234375, "b": 715.6031494140625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 6, "end_row_offset_idx": 7, "start_col_offset_idx": 4, "end_col_offset_idx": 5, "text": "0.7164", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 220.05532836914062, "t": 721.3153076171875, "r": 237.62635803222656, "b": 715.6031494140625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 6, "end_row_offset_idx": 7, "start_col_offset_idx": 5, "end_col_offset_idx": 6, "text": "0.7554", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 244.622802734375, "t": 721.3153076171875, "r": 262.1938171386719, "b": 715.6031494140625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 6, "end_row_offset_idx": 7, "start_col_offset_idx": 6, "end_col_offset_idx": 7, "text": "0.8360", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 269.1966552734375, "t": 721.3153076171875, "r": 286.7676696777344, "b": 715.6031494140625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 6, "end_row_offset_idx": 7, "start_col_offset_idx": 7, "end_col_offset_idx": 8, "text": "1.0000", "column_header": false, "row_header": false, "row_section": false}]]}}
null
2911d964-5fb7-416b-a46b-8c4364f9de45
2302.06555v2.pdf
caption
Table 9: Evaluation of POS impact on OPT$_{30B}$ and different CLIP models using EN-CLDI set. "Mix" denotes a combination of all POS categories.
null
439
75
72
image/png
18
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/209", "parent": {"cref": "#/body"}, "children": [], "label": "caption", "prov": [{"page_no": 17, "bbox": {"l": 70.96319580078125, "t": 562.115234375, "r": 290.2682189941406, "b": 524.6176147460938, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 144]}], "orig": "Table 9: Evaluation of POS impact on OPT$_{30B}$ and different CLIP models using EN-CLDI set. \"Mix\" denotes a combination of all POS categories.", "text": "Table 9: Evaluation of POS impact on OPT$_{30B}$ and different CLIP models using EN-CLDI set. \"Mix\" denotes a combination of all POS categories."}
null
363e8eaa-884e-4e93-901e-40ff23cf1b4a
2302.06555v2.pdf
table
<table><tbody><tr><td>Models</td><th>Train</th><th>Test</th><th>P@1</th><th>P@10</th><th>P@100</th></tr><tr><td>CLIP-ViT-L</td><td>Noun</td><td>Adj.</td><td>12.7</td><td>52.2</td><td>85.4</td></tr><tr><td>CLIP-RN50x64</td><td>1337</td><td>157</td><td>7.0</td><td>45.2</td><td>84.7</td></tr><tr><td>CLIP-ViT-L</td><td>Noun</td><td>Verb.</td><td>12.2</td><td>55.1</td><td>93.9</td></tr><tr><td>CLIP-RN50x64</td><td>1337</td><td>196</td><td>9.2</td><td>46.4</td><td>89.3</td></tr><tr><td>CLIP-ViT-L</td><td>Mix</td><td>Mix.</td><td>39.1</td><td>81.0</td><td>94.1</td></tr><tr><td>CLIP-RN50x64</td><td>1337</td><td>353</td><td>33.7</td><td>79.9</td><td>93.8</td></tr></tbody></table>
Table 9: Evaluation of POS impact on OPT$_{30B}$ and different CLIP models using EN-CLDI set. "Mix" denotes a combination of all POS categories.
414
185
72
image/png
18
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/tables/8", "parent": {"cref": "#/body"}, "children": [], "label": "table", "prov": [{"page_no": 17, "bbox": {"l": 78.54300689697266, "t": 664.9376831054688, "r": 285.4424743652344, "b": 572.4727783203125, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 0]}], "captions": [{"cref": "#/texts/209"}], "references": [], "footnotes": [], "image": {"mimetype": "image/png", "dpi": 144, "size": {"width": 414.0, "height": 185.0}, "uri": null}, "data": {"table_cells": [{"bbox": {"l": 84.09164428710938, "t": 659.4486083984375, "r": 109.04653930664062, "b": 652.011962890625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "Models", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 146.65635681152344, "t": 659.4486083984375, "r": 164.38265991210938, "b": 652.011962890625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "Train", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 173.49951171875, "t": 659.4486083984375, "r": 187.24134826660156, "b": 652.011962890625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "Test", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 200.21047973632812, "t": 659.4486083984375, "r": 216.65576171875, "b": 652.011962890625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "P@1", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 225.77259826660156, "t": 659.4486083984375, "r": 246.3770294189453, "b": 652.011962890625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 4, "end_col_offset_idx": 5, "text": "P@10", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 255.49388122558594, "t": 659.4486083984375, "r": 280.2574462890625, "b": 652.011962890625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 5, "end_col_offset_idx": 6, "text": "P@100", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 84.09164428710938, "t": 644.77099609375, "r": 124.87626647949219, "b": 637.3344116210938, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "CLIP-ViT-L", "column_header": false, "row_header": true, "row_section": false}, {"bbox": {"l": 146.65635681152344, "t": 644.77099609375, "r": 164.7699432373047, "b": 637.3344116210938, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "Noun", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 173.49899291992188, "t": 644.77099609375, "r": 188.05601501464844, "b": 637.3344116210938, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, 
"start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "Adj.", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 202.09844970703125, "t": 644.870849609375, "r": 216.6554718017578, "b": 637.3926391601562, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "12.7", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 231.8197479248047, "t": 644.870849609375, "r": 246.3767547607422, "b": 637.3926391601562, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 4, "end_col_offset_idx": 5, "text": "52.2", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 265.7001647949219, "t": 644.870849609375, "r": 280.2571716308594, "b": 637.3926391601562, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 5, "end_col_offset_idx": 6, "text": "85.4", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 84.09164428710938, "t": 634.439697265625, "r": 137.2372589111328, "b": 627.003173828125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "CLIP-RN50x64", "column_header": false, "row_header": true, "row_section": false}, {"bbox": {"l": 146.03262329101562, "t": 634.439697265625, "r": 162.66921997070312, "b": 627.003173828125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "1337", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 172.87449645996094, "t": 634.439697265625, "r": 185.35194396972656, "b": 627.003173828125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "157", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 206.25718688964844, "t": 634.439697265625, "r": 216.65505981445312, "b": 627.003173828125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "7.0", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 231.81932067871094, "t": 634.439697265625, "r": 246.37632751464844, "b": 627.003173828125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 4, "end_col_offset_idx": 5, "text": "45.2", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 265.6997375488281, "t": 634.439697265625, "r": 280.2567443847656, "b": 627.003173828125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 5, "end_col_offset_idx": 6, "text": "84.7", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 84.09164428710938, "t": 619.76220703125, "r": 124.87626647949219, "b": 612.3256225585938, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, 
"start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "CLIP-ViT-L", "column_header": false, "row_header": true, "row_section": false}, {"bbox": {"l": 146.65635681152344, "t": 619.76220703125, "r": 164.7699432373047, "b": 612.3256225585938, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "Noun", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 173.49899291992188, "t": 619.76220703125, "r": 190.95079040527344, "b": 612.3256225585938, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "Verb.", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 202.09844970703125, "t": 619.862060546875, "r": 216.6554718017578, "b": 612.3838500976562, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "12.2", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 231.8197479248047, "t": 619.862060546875, "r": 246.3767547607422, "b": 612.3838500976562, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 4, "end_col_offset_idx": 5, "text": "55.1", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 265.7001647949219, "t": 619.862060546875, "r": 280.2571716308594, "b": 612.3838500976562, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 5, "end_col_offset_idx": 6, "text": "93.9", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 84.09164428710938, "t": 609.430908203125, "r": 137.2372589111328, "b": 601.994384765625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 4, "end_row_offset_idx": 5, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "CLIP-RN50x64", "column_header": false, "row_header": true, "row_section": false}, {"bbox": {"l": 146.03262329101562, "t": 609.430908203125, "r": 162.66921997070312, "b": 601.994384765625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 4, "end_row_offset_idx": 5, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "1337", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 172.87449645996094, "t": 609.430908203125, "r": 185.35194396972656, "b": 601.994384765625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 4, "end_row_offset_idx": 5, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "196", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 206.25718688964844, "t": 609.430908203125, "r": 216.65505981445312, "b": 601.994384765625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 4, "end_row_offset_idx": 5, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "9.2", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 231.81932067871094, "t": 609.430908203125, "r": 246.37632751464844, "b": 601.994384765625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 4, "end_row_offset_idx": 
5, "start_col_offset_idx": 4, "end_col_offset_idx": 5, "text": "46.4", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 265.6997375488281, "t": 609.430908203125, "r": 280.2567443847656, "b": 601.994384765625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 4, "end_row_offset_idx": 5, "start_col_offset_idx": 5, "end_col_offset_idx": 6, "text": "89.3", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 84.09164428710938, "t": 594.7526245117188, "r": 124.87626647949219, "b": 587.3160400390625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 5, "end_row_offset_idx": 6, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "CLIP-ViT-L", "column_header": false, "row_header": true, "row_section": false}, {"bbox": {"l": 146.65635681152344, "t": 594.7526245117188, "r": 160.52296447753906, "b": 587.3160400390625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 5, "end_row_offset_idx": 6, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "Mix", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 173.49900817871094, "t": 594.7526245117188, "r": 189.44517517089844, "b": 587.3160400390625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 5, "end_row_offset_idx": 6, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "Mix.", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 202.09844970703125, "t": 594.8524169921875, "r": 216.6554718017578, "b": 587.374267578125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 5, "end_row_offset_idx": 6, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "39.1", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 231.8197479248047, "t": 594.8524169921875, "r": 246.3767547607422, "b": 587.374267578125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 5, "end_row_offset_idx": 6, "start_col_offset_idx": 4, "end_col_offset_idx": 5, "text": "81.0", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 265.7001647949219, "t": 594.8524169921875, "r": 280.2571716308594, "b": 587.374267578125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 5, "end_row_offset_idx": 6, "start_col_offset_idx": 5, "end_col_offset_idx": 6, "text": "94.1", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 84.09164428710938, "t": 584.42138671875, "r": 137.2372589111328, "b": 576.9848022460938, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 6, "end_row_offset_idx": 7, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "CLIP-RN50x64", "column_header": false, "row_header": true, "row_section": false}, {"bbox": {"l": 146.0326385498047, "t": 584.42138671875, "r": 162.66921997070312, "b": 576.9848022460938, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 6, "end_row_offset_idx": 7, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "1337", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 173.49900817871094, "t": 584.42138671875, "r": 185.9764404296875, "b": 576.9848022460938, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 6, 
"end_row_offset_idx": 7, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "353", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 202.09844970703125, "t": 584.42138671875, "r": 216.6554718017578, "b": 576.9848022460938, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 6, "end_row_offset_idx": 7, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "33.7", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 231.8197479248047, "t": 584.42138671875, "r": 246.3767547607422, "b": 576.9848022460938, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 6, "end_row_offset_idx": 7, "start_col_offset_idx": 4, "end_col_offset_idx": 5, "text": "79.9", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 265.7001647949219, "t": 584.42138671875, "r": 280.2571716308594, "b": 576.9848022460938, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 6, "end_row_offset_idx": 7, "start_col_offset_idx": 5, "end_col_offset_idx": 6, "text": "93.8", "column_header": false, "row_header": false, "row_section": false}], "num_rows": 7, "num_cols": 6, "grid": [[{"bbox": {"l": 84.09164428710938, "t": 659.4486083984375, "r": 109.04653930664062, "b": 652.011962890625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "Models", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 146.65635681152344, "t": 659.4486083984375, "r": 164.38265991210938, "b": 652.011962890625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "Train", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 173.49951171875, "t": 659.4486083984375, "r": 187.24134826660156, "b": 652.011962890625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "Test", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 200.21047973632812, "t": 659.4486083984375, "r": 216.65576171875, "b": 652.011962890625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "P@1", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 225.77259826660156, "t": 659.4486083984375, "r": 246.3770294189453, "b": 652.011962890625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 4, "end_col_offset_idx": 5, "text": "P@10", "column_header": true, "row_header": false, "row_section": false}, {"bbox": {"l": 255.49388122558594, "t": 659.4486083984375, "r": 280.2574462890625, "b": 652.011962890625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 0, "end_row_offset_idx": 1, "start_col_offset_idx": 5, "end_col_offset_idx": 6, "text": "P@100", "column_header": true, "row_header": false, "row_section": false}], [{"bbox": {"l": 84.09164428710938, "t": 644.77099609375, "r": 124.87626647949219, "b": 637.3344116210938, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, 
"start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "CLIP-ViT-L", "column_header": false, "row_header": true, "row_section": false}, {"bbox": {"l": 146.65635681152344, "t": 644.77099609375, "r": 164.7699432373047, "b": 637.3344116210938, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "Noun", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 173.49899291992188, "t": 644.77099609375, "r": 188.05601501464844, "b": 637.3344116210938, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "Adj.", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 202.09844970703125, "t": 644.870849609375, "r": 216.6554718017578, "b": 637.3926391601562, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "12.7", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 231.8197479248047, "t": 644.870849609375, "r": 246.3767547607422, "b": 637.3926391601562, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 4, "end_col_offset_idx": 5, "text": "52.2", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 265.7001647949219, "t": 644.870849609375, "r": 280.2571716308594, "b": 637.3926391601562, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 1, "end_row_offset_idx": 2, "start_col_offset_idx": 5, "end_col_offset_idx": 6, "text": "85.4", "column_header": false, "row_header": false, "row_section": false}], [{"bbox": {"l": 84.09164428710938, "t": 634.439697265625, "r": 137.2372589111328, "b": 627.003173828125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "CLIP-RN50x64", "column_header": false, "row_header": true, "row_section": false}, {"bbox": {"l": 146.03262329101562, "t": 634.439697265625, "r": 162.66921997070312, "b": 627.003173828125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "1337", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 172.87449645996094, "t": 634.439697265625, "r": 185.35194396972656, "b": 627.003173828125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "157", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 206.25718688964844, "t": 634.439697265625, "r": 216.65505981445312, "b": 627.003173828125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "7.0", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 231.81932067871094, "t": 634.439697265625, "r": 246.37632751464844, "b": 627.003173828125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 
1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 4, "end_col_offset_idx": 5, "text": "45.2", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 265.6997375488281, "t": 634.439697265625, "r": 280.2567443847656, "b": 627.003173828125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 2, "end_row_offset_idx": 3, "start_col_offset_idx": 5, "end_col_offset_idx": 6, "text": "84.7", "column_header": false, "row_header": false, "row_section": false}], [{"bbox": {"l": 84.09164428710938, "t": 619.76220703125, "r": 124.87626647949219, "b": 612.3256225585938, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "CLIP-ViT-L", "column_header": false, "row_header": true, "row_section": false}, {"bbox": {"l": 146.65635681152344, "t": 619.76220703125, "r": 164.7699432373047, "b": 612.3256225585938, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "Noun", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 173.49899291992188, "t": 619.76220703125, "r": 190.95079040527344, "b": 612.3256225585938, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "Verb.", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 202.09844970703125, "t": 619.862060546875, "r": 216.6554718017578, "b": 612.3838500976562, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "12.2", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 231.8197479248047, "t": 619.862060546875, "r": 246.3767547607422, "b": 612.3838500976562, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 4, "end_col_offset_idx": 5, "text": "55.1", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 265.7001647949219, "t": 619.862060546875, "r": 280.2571716308594, "b": 612.3838500976562, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 3, "end_row_offset_idx": 4, "start_col_offset_idx": 5, "end_col_offset_idx": 6, "text": "93.9", "column_header": false, "row_header": false, "row_section": false}], [{"bbox": {"l": 84.09164428710938, "t": 609.430908203125, "r": 137.2372589111328, "b": 601.994384765625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 4, "end_row_offset_idx": 5, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "CLIP-RN50x64", "column_header": false, "row_header": true, "row_section": false}, {"bbox": {"l": 146.03262329101562, "t": 609.430908203125, "r": 162.66921997070312, "b": 601.994384765625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 4, "end_row_offset_idx": 5, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "1337", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 172.87449645996094, "t": 609.430908203125, "r": 185.35194396972656, "b": 601.994384765625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, 
"col_span": 1, "start_row_offset_idx": 4, "end_row_offset_idx": 5, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "196", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 206.25718688964844, "t": 609.430908203125, "r": 216.65505981445312, "b": 601.994384765625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 4, "end_row_offset_idx": 5, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "9.2", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 231.81932067871094, "t": 609.430908203125, "r": 246.37632751464844, "b": 601.994384765625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 4, "end_row_offset_idx": 5, "start_col_offset_idx": 4, "end_col_offset_idx": 5, "text": "46.4", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 265.6997375488281, "t": 609.430908203125, "r": 280.2567443847656, "b": 601.994384765625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 4, "end_row_offset_idx": 5, "start_col_offset_idx": 5, "end_col_offset_idx": 6, "text": "89.3", "column_header": false, "row_header": false, "row_section": false}], [{"bbox": {"l": 84.09164428710938, "t": 594.7526245117188, "r": 124.87626647949219, "b": 587.3160400390625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 5, "end_row_offset_idx": 6, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "CLIP-ViT-L", "column_header": false, "row_header": true, "row_section": false}, {"bbox": {"l": 146.65635681152344, "t": 594.7526245117188, "r": 160.52296447753906, "b": 587.3160400390625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 5, "end_row_offset_idx": 6, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "Mix", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 173.49900817871094, "t": 594.7526245117188, "r": 189.44517517089844, "b": 587.3160400390625, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 5, "end_row_offset_idx": 6, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "Mix.", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 202.09844970703125, "t": 594.8524169921875, "r": 216.6554718017578, "b": 587.374267578125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 5, "end_row_offset_idx": 6, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "39.1", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 231.8197479248047, "t": 594.8524169921875, "r": 246.3767547607422, "b": 587.374267578125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 5, "end_row_offset_idx": 6, "start_col_offset_idx": 4, "end_col_offset_idx": 5, "text": "81.0", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 265.7001647949219, "t": 594.8524169921875, "r": 280.2571716308594, "b": 587.374267578125, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 5, "end_row_offset_idx": 6, "start_col_offset_idx": 5, "end_col_offset_idx": 6, "text": "94.1", "column_header": false, "row_header": false, "row_section": false}], [{"bbox": {"l": 84.09164428710938, "t": 584.42138671875, "r": 137.2372589111328, "b": 576.9848022460938, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, 
"col_span": 1, "start_row_offset_idx": 6, "end_row_offset_idx": 7, "start_col_offset_idx": 0, "end_col_offset_idx": 1, "text": "CLIP-RN50x64", "column_header": false, "row_header": true, "row_section": false}, {"bbox": {"l": 146.0326385498047, "t": 584.42138671875, "r": 162.66921997070312, "b": 576.9848022460938, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 6, "end_row_offset_idx": 7, "start_col_offset_idx": 1, "end_col_offset_idx": 2, "text": "1337", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 173.49900817871094, "t": 584.42138671875, "r": 185.9764404296875, "b": 576.9848022460938, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 6, "end_row_offset_idx": 7, "start_col_offset_idx": 2, "end_col_offset_idx": 3, "text": "353", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 202.09844970703125, "t": 584.42138671875, "r": 216.6554718017578, "b": 576.9848022460938, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 6, "end_row_offset_idx": 7, "start_col_offset_idx": 3, "end_col_offset_idx": 4, "text": "33.7", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 231.8197479248047, "t": 584.42138671875, "r": 246.3767547607422, "b": 576.9848022460938, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 6, "end_row_offset_idx": 7, "start_col_offset_idx": 4, "end_col_offset_idx": 5, "text": "79.9", "column_header": false, "row_header": false, "row_section": false}, {"bbox": {"l": 265.7001647949219, "t": 584.42138671875, "r": 280.2571716308594, "b": 576.9848022460938, "coord_origin": "BOTTOMLEFT"}, "row_span": 1, "col_span": 1, "start_row_offset_idx": 6, "end_row_offset_idx": 7, "start_col_offset_idx": 5, "end_col_offset_idx": 6, "text": "93.8", "column_header": false, "row_header": false, "row_section": false}]]}}
null
e30d4f0b-48c9-47d0-b5e9-d7c3621543b6
2302.06555v2.pdf
text
2015):
null
58
22
72
image/png
18
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/210", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 17, "bbox": {"l": 71.56356811523438, "t": 500.7362060546875, "r": 100.48365783691406, "b": 490.0966491699219, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 6]}], "orig": "2015):", "text": "2015):"}
null
f914a2ae-988b-4f2b-ba11-3d4d5e7734db
2302.06555v2.pdf
formula
d(a) = \frac{2}{n(n-1)} \sum_{k<j \leq n} \left( 1 - \frac{i_j \cdot i_k}{\lvert i_j \rvert\,\lvert i_k \rvert} \right)
null
316
61
72
image/png
18
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/211", "parent": {"cref": "#/body"}, "children": [], "label": "formula", "prov": [{"page_no": 17, "bbox": {"l": 101.1298599243164, "t": 480.173828125, "r": 258.9724426269531, "b": 449.6528015136719, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 77]}], "orig": "d ( a ) = 2 n ( n - 1) \u2211 k<j \u2264 n 1 - i$_{j}$ \u00b7 i$_{k}$ | i$_{j}$ || i$_{k}$ |", "text": "d ( a ) = 2 n ( n - 1) \u2211 k<j \u2264 n 1 - i$_{j}$ \u00b7 i$_{k}$ | i$_{j}$ || i$_{k}$ |"}
null
5de5fea0-60d0-4c9e-bade-68e854c61177
2302.06555v2.pdf
section_header
B.2 Language Dispersion
null
251
22
72
image/png
18
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/212", "parent": {"cref": "#/body"}, "children": [], "label": "section_header", "prov": [{"page_no": 17, "bbox": {"l": 71.37012481689453, "t": 438.3519592285156, "r": 196.8655548095703, "b": 427.3279724121094, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 23]}], "orig": "B.2 Language Dispersion", "text": "B.2 Language Dispersion", "level": 1}
null
72c4b925-088f-49e8-bd4b-074758cd8349
2302.06555v2.pdf
text
The language dispersion d of a concept alias a is defined as the average pairwise cosine distance between all the corresponding word representations w$_{1}$, w$_{2}$...w$_{n}$ in the set of n sentences for a given alias:
null
438
129
72
image/png
18
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/213", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 17, "bbox": {"l": 71.2419662475586, "t": 420.04296875, "r": 290.2678527832031, "b": 355.6986389160156, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 220]}], "orig": "The language dispersion d of a concept alias a is defined as the average pairwise cosine distance between all the corresponding word representations w$_{1}$, w$_{2}$...w$_{n}$ in the set of n sentences for a given alias:", "text": "The language dispersion d of a concept alias a is defined as the average pairwise cosine distance between all the corresponding word representations w$_{1}$, w$_{2}$...w$_{n}$ in the set of n sentences for a given alias:"}
null
fbf92b92-9314-483e-80a9-29e9f8d163e0
2302.06555v2.pdf
formula
d(a) = \frac{2}{n(n-1)} \sum_{k<j \leq n} \left( 1 - \frac{w_j \cdot w_k}{\lvert w_j \rvert\,\lvert w_k \rvert} \right)
null
332
64
72
image/png
18
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/214", "parent": {"cref": "#/body"}, "children": [], "label": "formula", "prov": [{"page_no": 17, "bbox": {"l": 97.32268524169922, "t": 347.8448181152344, "r": 263.3014831542969, "b": 316.15997314453125, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 77]}], "orig": "d ( a ) = 2 n ( n - 1) \u2211 k<j \u2264 n 1 - w$_{j}$ \u00b7 w$_{k}$ | w$_{j}$ || w$_{k}$ |", "text": "d ( a ) = 2 n ( n - 1) \u2211 k<j \u2264 n 1 - w$_{j}$ \u00b7 w$_{k}$ | w$_{j}$ || w$_{k}$ |"}
null
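Both dispersion formulas above compute the same quantity: the average pairwise cosine distance over a set of n representation vectors (image embeddings i_1, ..., i_n or word representations w_1, ..., w_n for a given concept alias). A minimal NumPy sketch of that computation follows; the function name and the random example data are illustrative only, not taken from the source.

```python
import numpy as np

def dispersion(vectors: np.ndarray) -> float:
    """Average pairwise cosine distance,
    d(a) = 2 / (n(n-1)) * sum_{k<j<=n} (1 - v_j.v_k / (|v_j||v_k|)),
    over the rows of an (n, d) array of representation vectors."""
    unit = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = unit @ unit.T                        # cosine similarity of every pair
    j, k = np.triu_indices(len(vectors), k=1)   # index all pairs with k < j
    return float(np.mean(1.0 - sims[j, k]))     # mean distance = 2/(n(n-1)) * sum

# Illustrative usage: dispersion of five random 768-dimensional vectors.
rng = np.random.default_rng(0)
print(dispersion(rng.normal(size=(5, 768))))
```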
573f0c41-b9c7-49a3-9471-51991b8ec111
2302.06555v2.pdf
section_header
C More Results
null
177
24
72
image/png
18
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/215", "parent": {"cref": "#/body"}, "children": [], "label": "section_header", "prov": [{"page_no": 17, "bbox": {"l": 71.76544189453125, "t": 305.74468994140625, "r": 160.4810333251953, "b": 294.17938232421875, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 14]}], "orig": "C More Results", "text": "C More Results", "level": 1}
null
8d5a5666-a319-4d6e-96ce-f284bd4456eb
2302.06555v2.pdf
text
Cumulative percentage of variance explained. In Table 8, we present the cumulative percentage of variance explained by each selected component after PCA.
null
441
102
72
image/png
18
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/216", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 17, "bbox": {"l": 71.33179473876953, "t": 284.09747314453125, "r": 292.175537109375, "b": 232.8676300048828, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 153]}], "orig": "Cumulative percentage of variance explained. In Table 8, we present the cumulative percentage of variance explained by each selected component after PCA.", "text": "Cumulative percentage of variance explained. In Table 8, we present the cumulative percentage of variance explained by each selected component after PCA."}
null
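The cumulative-variance numbers referenced above are the running sum of per-component explained variance ratios after PCA. A minimal scikit-learn sketch of how such values can be computed; the embedding matrix here is a random stand-in, not the actual model representations.

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for a (num_concepts, hidden_dim) matrix of model representations.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 512))

pca = PCA(n_components=100).fit(embeddings)
cumulative = np.cumsum(pca.explained_variance_ratio_)
for k in (1, 2, 4, 8, 16, 32, 64, 100):
    print(f"{k:>3} components explain {cumulative[k - 1]:.4f} of the variance")
```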
ff749416-42e2-4a87-9843-56853ee3018a
2302.06555v2.pdf
text
CLIP Results. We also investigate the effects of incorporating text signals during vision pretraining by comparing pure vision models against selected CLIP (Radford et al., 2021) vision encoders (ResNet50, ResNet101, ResNet50x64, ViT-Base-Patch32, and ViT-Large-Patch14). The results align with our expectations, indicating that the CLIP vision encoders exhibit better alignment with LMs. The findings also support our previous observation that larger LMs tend to demonstrate better alignment. However, it would be unfair to
null
442
294
72
image/png
18
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/217", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 17, "bbox": {"l": 71.11685943603516, "t": 222.97711181640625, "r": 292.077880859375, "b": 76.12005615234375, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 523]}], "orig": "CLIP Results. We also investigate the effects of incorporating text signals during vision pretraining by comparing pure vision models against selected CLIP (Radford et al., 2021) vision encoders (ResNet50, ResNet101, ResNet50x60, ViTBase-Patch32, and ViT-Large-Patch14). The results align with our expectations, indicating that the CLIP vision encoders exhibit better alignment with LMs. The findings also support our previous observation that larger LMs tend to demonstrate better alignment. However, it would be unfair to", "text": "CLIP Results. We also investigate the effects of incorporating text signals during vision pretraining by comparing pure vision models against selected CLIP (Radford et al., 2021) vision encoders (ResNet50, ResNet101, ResNet50x60, ViTBase-Patch32, and ViT-Large-Patch14). The results align with our expectations, indicating that the CLIP vision encoders exhibit better alignment with LMs. The findings also support our previous observation that larger LMs tend to demonstrate better alignment. However, it would be unfair to"}
null
807b210d-f163-47ed-bbd5-9f2d5e0cb2be
2302.06555v2.pdf
caption
Figure 9: Illustrating the impact of scaling CLIP models up on Exclude-1K set. The incremental growth in P@100 for scaled-up CLIP models is marginal, contrasting with the more substantial increase observed when scaling up LMs in the same family.
null
442
157
72
image/png
18
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/218", "parent": {"cref": "#/body"}, "children": [], "label": "caption", "prov": [{"page_no": 17, "bbox": {"l": 306.6195983886719, "t": 421.1622009277344, "r": 527.356201171875, "b": 342.441650390625, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 245]}], "orig": "Figure 9: Illustrating the impact of scaling CLIP models up on Exclude-1K set. The incremental growth in P@100 for scaled-up CLIP models is marginal, contrasting with the more substantial increase observed when scaling up LMs in the same family.", "text": "Figure 9: Illustrating the impact of scaling CLIP models up on Exclude-1K set. The incremental growth in P@100 for scaled-up CLIP models is marginal, contrasting with the more substantial increase observed when scaling up LMs in the same family."}
null
5962a406-fd03-46b6-8e01-61b889bd56df
2302.06555v2.pdf
picture
null
Figure 9: Illustrating the impact of scaling CLIP models up on Exclude-1K set. The incremental growth in P@100 for scaled-up CLIP models is marginal, contrasting with the more substantial increase observed when scaling up LMs in the same family.
390
317
72
image/png
18
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/pictures/23", "parent": {"cref": "#/body"}, "children": [], "label": "picture", "prov": [{"page_no": 17, "bbox": {"l": 317.4569091796875, "t": 768.2399291992188, "r": 512.4075927734375, "b": 609.8652954101562, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 245]}], "captions": [{"cref": "#/texts/218"}], "references": [], "footnotes": [], "image": {"mimetype": "image/png", "dpi": 144, "size": {"width": 390.0, "height": 317.0}, "uri": null}, "annotations": []}
null
c7df0093-309d-4f72-9a96-5db7424615be
2302.06555v2.pdf
picture
null
null
391
292
72
image/png
18
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/pictures/24", "parent": {"cref": "#/body"}, "children": [], "label": "picture", "prov": [{"page_no": 17, "bbox": {"l": 317.1781921386719, "t": 595.3639526367188, "r": 512.4075927734375, "b": 449.40948486328125, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 0]}], "captions": [], "references": [], "footnotes": [], "image": {"mimetype": "image/png", "dpi": 144, "size": {"width": 391.0, "height": 292.0}, "uri": null}, "annotations": []}
null
beeadb91-d612-4de4-aee0-43528f793782
2302.06555v2.pdf
text
CLIP-VIT-L-14
null
99
17
72
image/png
18
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/219", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 17, "bbox": {"l": 333.468994140625, "t": 444.474609375, "r": 383.1658935546875, "b": 436.07421875, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 13]}], "orig": "CLIP-VIT-L-14", "text": "CLIP-VIT-L-14"}
null
9a4cc1b1-57b7-4df3-9871-7b554b97b92b
2302.06555v2.pdf
text
CLIP-VIT-B-32
null
101
17
72
image/png
18
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/220", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 17, "bbox": {"l": 390.9438781738281, "t": 444.474609375, "r": 441.5317687988281, "b": 436.07421875, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 13]}], "orig": "CLIP-VIT-B-32", "text": "CLIP-VIT-B-32"}
null
75fda726-3abb-4770-9039-0c4f9ebb8fef
2302.06555v2.pdf
text
directly compare the results from CLIP with pure vision models, as the pretraining datasets they utilize differ significantly in scale and scope. Detailed results are presented in Figure 9 and Figure 10.
null
442
103
72
image/png
18
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/221", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 17, "bbox": {"l": 306.244873046875, "t": 318.658203125, "r": 527.3578491210938, "b": 267.01153564453125, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 203]}], "orig": "directly compare the results from CLIP with pure vision models, as the pretraining datasets they utilize differ significantly in scale and scope. Detailed results are presented in Figure 9 and Figure 10.", "text": "directly compare the results from CLIP with pure vision models, as the pretraining datasets they utilize differ significantly in scale and scope. Detailed results are presented in Figure 9 and Figure 10."}
null
cd22f130-095d-4521-b9ee-1c2cf2480ae2
2302.06555v2.pdf
text
POS impact on CLIP and OPT. In Table 9, we report the POS impact on OPT$_{30B}$ and the two best-performing CLIP vision encoders in our experiments.
null
438
76
72
image/png
18
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/222", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 17, "bbox": {"l": 306.3138427734375, "t": 257.1636962890625, "r": 525.539794921875, "b": 219.1971435546875, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 133]}], "orig": "POS impact on CLIP and OPT. In Table 9, we report the POS impact on OPT$_{30B}$ and two best CLIP vision encoders in our experiments.", "text": "POS impact on CLIP and OPT. In Table 9, we report the POS impact on OPT$_{30B}$ and two best CLIP vision encoders in our experiments."}
null
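The P@k columns in Table 9 are retrieval precision: the fraction of test items whose gold counterpart ranks among the k nearest neighbours of the mapped representation. One common way to compute such a score is sketched below; whether this matches the paper's exact retrieval protocol (similarity measure, tie handling) is an assumption, and all array names are illustrative.

```python
import numpy as np

def precision_at_k(queries: np.ndarray, targets: np.ndarray, k: int) -> float:
    """Fraction of queries whose gold target (same row index) is among the
    k most cosine-similar rows of `targets`."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    t = targets / np.linalg.norm(targets, axis=1, keepdims=True)
    sims = q @ t.T                          # (n, n) query-target similarities
    gold = np.diag(sims)                    # similarity to the correct target
    rank = (sims > gold[:, None]).sum(1)    # number of targets ranked above it
    return float((rank < k).mean())

# Illustrative usage with random stand-ins (e.g. mapped vision vs. LM vectors).
rng = np.random.default_rng(0)
queries, targets = rng.normal(size=(353, 256)), rng.normal(size=(353, 256))
for k in (1, 10, 100):
    print(f"P@{k} = {precision_at_k(queries, targets, k):.3f}")
```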
c935ad4a-c949-4ced-9a58-c8bc3e06be98
2304.09355v5.pdf
page_header
arXiv:2304.09355v5 [cs.LG] 21 Nov 2023
null
39
704
72
image/png
2
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/0", "parent": {"cref": "#/body"}, "children": [], "label": "page_header", "prov": [{"page_no": 1, "bbox": {"l": 17.048152923583984, "t": 584.1799926757812, "r": 36.339778900146484, "b": 231.9525146484375, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 38]}], "orig": "arXiv:2304.09355v5 [cs.LG] 21 Nov 2023", "text": "arXiv:2304.09355v5 [cs.LG] 21 Nov 2023"}
null
bc299c52-9dcc-44a9-a0cd-b5b9cf7b9aa2
2304.09355v5.pdf
page_header
To Compress or Not to Compress
null
349
20
72
image/png
2
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/1", "parent": {"cref": "#/body"}, "children": [], "label": "page_header", "prov": [{"page_no": 1, "bbox": {"l": 216.67510986328125, "t": 752.1227416992188, "r": 391.006103515625, "b": 741.8831176757812, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 30]}], "orig": "To Compress or Not to Compress", "text": "To Compress or Not to Compress"}
null
df371f1d-6e5c-4bc6-872b-4e850a0ddbc7
2304.09355v5.pdf
section_header
To Compress or Not to Compress - Self-Supervised Learning and Information Theory: A Review
null
863
65
72
image/png
2
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/2", "parent": {"cref": "#/body"}, "children": [], "label": "section_header", "prov": [{"page_no": 1, "bbox": {"l": 90.24012756347656, "t": 695.8875122070312, "r": 521.2555541992188, "b": 663.5675659179688, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 90]}], "orig": "To Compress or Not to Compress - Self-Supervised Learning and Information Theory: A Review", "text": "To Compress or Not to Compress - Self-Supervised Learning and Information Theory: A Review", "level": 1}
null
eba286c6-7443-4769-8fde-8ba8d8f35061
2304.09355v5.pdf
text
Ravid Shwartz-Ziv New York University
null
398
22
72
image/png
2
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/3", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 1, "bbox": {"l": 89.51315307617188, "t": 647.0247802734375, "r": 288.3852233886719, "b": 635.8550415039062, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 37]}], "orig": "Ravid Shwartz-Ziv New York University", "text": "Ravid Shwartz-Ziv New York University"}
null
2e954c16-4aa1-458c-9973-538469ab5a27
2304.09355v5.pdf
text
null
272
18
72
image/png
2
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/4", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 1, "bbox": {"l": 386.21044921875, "t": 645.8736572265625, "r": 521.9979858398438, "b": 636.7821655273438, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 25]}], "orig": "[email protected]", "text": "[email protected]"}
null
62d1143f-715a-46b1-a64f-8c03907955b4
2304.09355v5.pdf
text
Yann LeCun New York University & Meta AI - FAIR
null
503
22
72
image/png
2
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/5", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 1, "bbox": {"l": 88.8544921875, "t": 633.96142578125, "r": 340.334228515625, "b": 623.0888671875, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 47]}], "orig": "Yann LeCun New York University & Meta AI - FAIR", "text": "Yann LeCun New York University & Meta AI - FAIR"}
null
7a4c7bae-d7a2-42ac-b46a-7a68dcc5056e
2304.09355v5.pdf
section_header
Abstract
null
103
23
72
image/png
2
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/6", "parent": {"cref": "#/body"}, "children": [], "label": "section_header", "prov": [{"page_no": 1, "bbox": {"l": 279.8447265625, "t": 585.865234375, "r": 331.7017517089844, "b": 574.3516845703125, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 8]}], "orig": "Abstract", "text": "Abstract", "level": 1}
null
d81777c5-2a34-4d37-af6c-535e121b0595
2304.09355v5.pdf
text
Deep neural networks excel in supervised learning tasks but are constrained by the need for extensive labeled data. Self-supervised learning emerges as a promising alternative, allowing models to learn without explicit labels. Information theory, and notably the information bottleneck principle, has been pivotal in shaping deep neural networks. This principle focuses on optimizing the trade-off between compression and preserving relevant information, providing a foundation for efficient network design in supervised contexts. However, its precise role and adaptation in self-supervised learning remain unclear. In this work, we scrutinize various self-supervised learning approaches from an information-theoretic perspective, introducing a unified framework that encapsulates the self-supervised information-theoretic learning problem. We weave together existing research into a cohesive narrative, delve into contemporary self-supervised methodologies, and spotlight potential research avenues and inherent challenges. Additionally, we discuss the empirical evaluation of information-theoretic quantities and their estimation methods. Overall, this paper furnishes an exhaustive review of the intersection of information theory, self-supervised learning, and deep neural networks.
null
790
354
72
image/png
2
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/7", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 1, "bbox": {"l": 108.97402954101562, "t": 559.693359375, "r": 504.0065612792969, "b": 382.31695556640625, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 1286]}], "orig": "Deep neural networks excel in supervised learning tasks but are constrained by the need for extensive labeled data. Self-supervised learning emerges as a promising alternative, allowing models to learn without explicit labels. Information theory, and notably the information bottleneck principle, has been pivotal in shaping deep neural networks. This principle focuses on optimizing the trade-off between compression and preserving relevant information, providing a foundation for efficient network design in supervised contexts. However, its precise role and adaptation in self-supervised learning remain unclear. In this work, we scrutinize various self-supervised learning approaches from an informationtheoretic perspective, introducing a unified framework that encapsulates the self-supervised information-theoretic learning problem . We weave together existing research into a cohesive narrative, delve into contemporary self-supervised methodologies, and spotlight potential research avenues and inherent challenges. Additionally, we discuss the empirical evaluation of information-theoretic quantities and their estimation methods. Overall, this paper furnishes an exhaustive review of the intersection of information theory, self-supervised learning, and deep neural networks.", "text": "Deep neural networks excel in supervised learning tasks but are constrained by the need for extensive labeled data. Self-supervised learning emerges as a promising alternative, allowing models to learn without explicit labels. Information theory, and notably the information bottleneck principle, has been pivotal in shaping deep neural networks. This principle focuses on optimizing the trade-off between compression and preserving relevant information, providing a foundation for efficient network design in supervised contexts. However, its precise role and adaptation in self-supervised learning remain unclear. In this work, we scrutinize various self-supervised learning approaches from an informationtheoretic perspective, introducing a unified framework that encapsulates the self-supervised information-theoretic learning problem . We weave together existing research into a cohesive narrative, delve into contemporary self-supervised methodologies, and spotlight potential research avenues and inherent challenges. Additionally, we discuss the empirical evaluation of information-theoretic quantities and their estimation methods. Overall, this paper furnishes an exhaustive review of the intersection of information theory, self-supervised learning, and deep neural networks."}
null
afc0f5f7-7730-4ed8-8aa9-157448dee7ce
2304.09355v5.pdf
text
Keywords: Self-Supervised Learning, Information Theory, Representation Learning
null
750
21
72
image/png
2
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/8", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 1, "bbox": {"l": 109.1109848022461, "t": 370.41619873046875, "r": 483.8846740722656, "b": 359.9473876953125, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 79]}], "orig": "Keywords: Self-Supervised Learning, Information Theory, Representation Learning", "text": "Keywords: Self-Supervised Learning, Information Theory, Representation Learning"}
null
63b72cfa-0b0f-45f8-83d7-c57f5f877641
2304.09355v5.pdf
section_header
1. Introduction
null
181
22
72
image/png
2
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/9", "parent": {"cref": "#/body"}, "children": [], "label": "section_header", "prov": [{"page_no": 1, "bbox": {"l": 89.9918441772461, "t": 331.13165283203125, "r": 180.34707641601562, "b": 319.9496765136719, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 15]}], "orig": "1. Introduction", "text": "1. Introduction", "level": 1}
null
92ebe21c-6395-4e5a-90a1-95b4bbcc724e
2304.09355v5.pdf
text
Deep neural networks (DNNs) have revolutionized fields such as computer vision, natural language processing, and speech recognition due to their remarkable performance in supervised learning tasks (Alam et al., 2020; He et al., 2015; LeCun et al., 2015). However, the success of DNNs is often limited by the need for vast amounts of labeled data, which can be both time-consuming and expensive to acquire. Self-supervised learning (SSL) emerges as a promising alternative, enabling models to learn from data without explicit labels by leveraging the underlying structure and relationships within the data itself.
null
870
186
72
image/png
2
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/10", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 1, "bbox": {"l": 88.8239974975586, "t": 309.8063659667969, "r": 523.817626953125, "b": 217.18157958984375, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 612]}], "orig": "Deep neural networks (DNNs) have revolutionized fields such as computer vision, natural language processing, and speech recognition due to their remarkable performance in supervised learning tasks (Alam et al., 2020; He et al., 2015; LeCun et al., 2015). However, the success of DNNs is often limited by the need for vast amounts of labeled data, which can be both time-consuming and expensive to acquire. Self-supervised learning (SSL) emerges as a promising alternative, enabling models to learn from data without explicit labels by leveraging the underlying structure and relationships within the data itself.", "text": "Deep neural networks (DNNs) have revolutionized fields such as computer vision, natural language processing, and speech recognition due to their remarkable performance in supervised learning tasks (Alam et al., 2020; He et al., 2015; LeCun et al., 2015). However, the success of DNNs is often limited by the need for vast amounts of labeled data, which can be both time-consuming and expensive to acquire. Self-supervised learning (SSL) emerges as a promising alternative, enabling models to learn from data without explicit labels by leveraging the underlying structure and relationships within the data itself."}
null
2c78e44b-77e8-4f29-a322-80c6271b75f8
2304.09355v5.pdf
text
Recent advances in SSL have been driven by joint embedding architectures, such as Siamese Nets (Bromley et al., 1993), DrLIM (Chopra et al., 2005; Hadsell et al., 2006), and SimCLR (Chen et al., 2020a). These approaches define a loss function that encourages representations of different versions of the same image to be similar while pushing representations of distinct images apart. After optimizing the surrogate objective, the pre-trained model can be employed as a feature extractor, with the learned features serving as inputs for downstream supervised tasks like image classification, object detection, instance segmentation, or pose estimation (Caron et al., 2021; Chen et al., 2020a; Misra and van der Maaten, 2020; Shwartz-Ziv et al., 2022b). Although SSL methods have shown promising results in practice, the
null
871
239
72
image/png
2
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/11", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 1, "bbox": {"l": 88.7239990234375, "t": 205.8809814453125, "r": 523.8172607421875, "b": 86.47491455078125, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 818]}], "orig": "Recent advances in SSL have been driven by joint embedding architectures, such as Siamese Nets (Bromley et al., 1993), DrLIM (Chopra et al., 2005; Hadsell et al., 2006), and SimCLR (Chen et al., 2020a). These approaches define a loss function that encourages representations of different versions of the same image to be similar while pushing representations of distinct images apart. After optimizing the surrogate objective, the pre-trained model can be employed as a feature extractor, with the learned features serving as inputs for downstream supervised tasks like image classification, object detection, instance segmentation, or pose estimation (Caron et al., 2021; Chen et al., 2020a; Misra and van der Maaten, 2020; ShwartzZiv et al., 2022b). Although SSL methods have shown promising results in practice, the", "text": "Recent advances in SSL have been driven by joint embedding architectures, such as Siamese Nets (Bromley et al., 1993), DrLIM (Chopra et al., 2005; Hadsell et al., 2006), and SimCLR (Chen et al., 2020a). These approaches define a loss function that encourages representations of different versions of the same image to be similar while pushing representations of distinct images apart. After optimizing the surrogate objective, the pre-trained model can be employed as a feature extractor, with the learned features serving as inputs for downstream supervised tasks like image classification, object detection, instance segmentation, or pose estimation (Caron et al., 2021; Chen et al., 2020a; Misra and van der Maaten, 2020; ShwartzZiv et al., 2022b). Although SSL methods have shown promising results in practice, the"}
null
6a615eda-22e3-46ec-b111-538418f40309
2304.09355v5.pdf
page_footer
1
null
10
19
72
image/png
2
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/12", "parent": {"cref": "#/body"}, "children": [], "label": "page_footer", "prov": [{"page_no": 1, "bbox": {"l": 303.3566589355469, "t": 66.76409912109375, "r": 308.49029541015625, "b": 57.317256927490234, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 1]}], "orig": "1", "text": "1"}
null
5f6d419e-d3e6-4ddc-a214-f9c50540b26d
2304.09355v5.pdf
text
theoretical underpinnings behind their effectiveness remain an open question (Arora et al., 2019; Lee et al., 2021a).
null
869
49
72
image/png
3
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/13", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 2, "bbox": {"l": 89.18399810791016, "t": 698.4678344726562, "r": 523.51904296875, "b": 674.0634155273438, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 117]}], "orig": "theoretical underpinnings behind their effectiveness remain an open question (Arora et al., 2019; Lee et al., 2021a).", "text": "theoretical underpinnings behind their effectiveness remain an open question (Arora et al., 2019; Lee et al., 2021a)."}
null
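To make the "pre-train, then reuse as a feature extractor" workflow described above concrete, here is a minimal sketch that freezes a hypothetical pre-trained encoder and fits a linear probe on its features for a downstream classification task. The encoder architecture, dimensions, and the toy labeled batch are illustrative assumptions, not components of any cited method.

import torch
import torch.nn as nn

# Stand-in for an encoder pre-trained with a self-supervised objective (hypothetical).
encoder = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 64))
encoder.requires_grad_(False)            # freeze: only the probe is trained downstream

probe = nn.Linear(64, 10)                # linear classifier on top of frozen features
opt = torch.optim.SGD(probe.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(256, 32)                 # toy labeled batch for the downstream task
y = torch.randint(0, 10, (256,))
for _ in range(100):
    opt.zero_grad()
    with torch.no_grad():
        features = encoder(x)            # frozen representations
    loss = loss_fn(probe(features), y)
    loss.backward()
    opt.step()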
fdf81ebd-d0e6-406b-bb19-7e29f306c781
2304.09355v5.pdf
text
Information theory has played a crucial role in understanding and optimizing deep neural networks, from practical applications like the variational information bottleneck (Alemi et al., 2016) to theoretical investigations of generalization bounds induced by mutual information (Steinke and Zakynthinou, 2020; Xu and Raginsky, 2017). Building upon these foundations, several researchers have attempted to enhance self-supervised and semi-supervised learning algorithms using information-theoretic principles, such as the Mutual Information Neural Estimator (MINE) (Belghazi et al., 2018b) combined with the information maximization (InfoMax) principle (Linsker, 1988). However, the plethora of objective functions, contradicting assumptions, and various estimation techniques in the literature can make it challenging to grasp the underlying principles and their implications.
null
870
267
72
image/png
3
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/14", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 2, "bbox": {"l": 89.1538314819336, "t": 662.443603515625, "r": 523.8245849609375, "b": 529.1944580078125, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 874]}], "orig": "Information theory has played a crucial role in understanding and optimizing deep neural networks, from practical applications like the variational information bottleneck (Alemi et al., 2016) to theoretical investigations of generalization bounds induced by mutual information (Steinke and Zakynthinou, 2020; Xu and Raginsky, 2017). Building upon these foundations, several researchers have attempted to enhance self-supervised and semisupervised learning algorithms using information-theoretic principles, such as the Mutual Information Neural Estimator (MINE) (Belghazi et al., 2018b) combined with the information maximization (InfoMax) principle (Linsker, 1988). However, the plethora of objective functions, contradicting assumptions, and various estimation techniques in the literature can make it challenging to grasp the underlying principles and their implications.", "text": "Information theory has played a crucial role in understanding and optimizing deep neural networks, from practical applications like the variational information bottleneck (Alemi et al., 2016) to theoretical investigations of generalization bounds induced by mutual information (Steinke and Zakynthinou, 2020; Xu and Raginsky, 2017). Building upon these foundations, several researchers have attempted to enhance self-supervised and semisupervised learning algorithms using information-theoretic principles, such as the Mutual Information Neural Estimator (MINE) (Belghazi et al., 2018b) combined with the information maximization (InfoMax) principle (Linsker, 1988). However, the plethora of objective functions, contradicting assumptions, and various estimation techniques in the literature can make it challenging to grasp the underlying principles and their implications."}
null
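To illustrate the MINE idea referenced above, the following is a minimal sketch of the Donsker-Varadhan lower bound on mutual information, maximized over a small critic network. The critic architecture, hyperparameters, and synthetic data are illustrative assumptions; this is a sketch of the general technique, not the exact estimator of any cited work.

import math
import torch
import torch.nn as nn

class Critic(nn.Module):
    """Small statistics network T(x, z) for the Donsker-Varadhan bound."""
    def __init__(self, x_dim, z_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, z):
        return self.net(torch.cat([x, z], dim=-1)).squeeze(-1)

def dv_lower_bound(critic, x, z):
    """Donsker-Varadhan bound: I(X;Z) >= E_p(x,z)[T] - log E_p(x)p(z)[exp(T)]."""
    joint = critic(x, z).mean()
    z_shuffled = z[torch.randperm(z.size(0))]          # approximate product of marginals
    marginal = torch.logsumexp(critic(x, z_shuffled), dim=0) - math.log(z.size(0))
    return joint - marginal

# Illustrative usage: z is a noisy linear function of x, so I(X;Z) > 0.
x = torch.randn(512, 10)
z = x @ torch.randn(10, 4) + 0.1 * torch.randn(512, 4)
critic = Critic(10, 4)
opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
for _ in range(300):
    opt.zero_grad()
    loss = -dv_lower_bound(critic, x, z)               # ascend the lower bound
    loss.backward()
    opt.step()
print(float(dv_lower_bound(critic, x, z)))             # estimated lower bound in nats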
d4d508ee-f428-48f5-b765-531dbf54758e
2304.09355v5.pdf
text
In this paper, we aim to achieve two objectives. First, we propose a unified framework that synthesizes existing research on self-supervised and semi-supervised learning from an information-theoretic standpoint. This framework allows us to present and compare current methods, analyze their assumptions and difficulties, and discuss the optimal representation for neural networks in general and self-supervised networks in particular. Second, we explore different methods and estimators for optimizing information-theoretic quantities in deep neural networks and investigate how recent models optimize various information-theoretic terms.
null
866
210
72
image/png
3
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/15", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 2, "bbox": {"l": 89.29786682128906, "t": 517.6422119140625, "r": 522.2979125976562, "b": 412.41864013671875, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 640]}], "orig": "In this paper, we aim to achieve two objectives. First, we propose a unified framework that synthesizes existing research on self-supervised and semi-supervised learning from an information-theoretic standpoint. This framework allows us to present and compare current methods, analyze their assumptions and difficulties, and discuss the optimal representation for neural networks in general and self-supervised networks in particular. Second, we explore different methods and estimators for optimizing information-theoretic quantities in deep neural networks and investigate how recent models optimize various theoretical-information terms.", "text": "In this paper, we aim to achieve two objectives. First, we propose a unified framework that synthesizes existing research on self-supervised and semi-supervised learning from an information-theoretic standpoint. This framework allows us to present and compare current methods, analyze their assumptions and difficulties, and discuss the optimal representation for neural networks in general and self-supervised networks in particular. Second, we explore different methods and estimators for optimizing information-theoretic quantities in deep neural networks and investigate how recent models optimize various theoretical-information terms."}
null
1366b73a-52ce-4247-b1d5-c168269b2acf
2304.09355v5.pdf
text
By reviewing the literature on various aspects of information-theoretic learning, we provide a comprehensive understanding of the interplay between information theory, self-supervised learning, and deep neural networks. We discuss the application of the information bottleneck principle (Tishby et al., 1999a), connections between information theory and generalization, and recent information-theoretic learning algorithms. Furthermore, we examine how the information-theoretic perspective can offer insights into the design of better self-supervised learning algorithms and the potential benefits of using information theory in SSL across a wide range of applications.
null
869
212
72
image/png
3
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/16", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 2, "bbox": {"l": 89.21715545654297, "t": 400.3964538574219, "r": 523.5151977539062, "b": 294.5848083496094, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 669]}], "orig": "By reviewing the literature on various aspects of information-theoretic learning, we provide a comprehensive understanding of the interplay between information theory, self-supervised learning, and deep neural networks. We discuss the application of the information bottleneck principle (Tishby et al., 1999a), connections between information theory and generalization, and recent information-theoretic learning algorithms. Furthermore, we examine how the information-theoretic perspective can offer insights into the design of better self-supervised learning algorithms and the potential benefits of using information theory in SSL across a wide range of applications.", "text": "By reviewing the literature on various aspects of information-theoretic learning, we provide a comprehensive understanding of the interplay between information theory, self-supervised learning, and deep neural networks. We discuss the application of the information bottleneck principle (Tishby et al., 1999a), connections between information theory and generalization, and recent information-theoretic learning algorithms. Furthermore, we examine how the information-theoretic perspective can offer insights into the design of better self-supervised learning algorithms and the potential benefits of using information theory in SSL across a wide range of applications."}
null
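For reference, the information bottleneck principle mentioned above is usually stated as the Lagrangian below (standard notation, which may differ from the notation adopted later in the survey):

\min_{p(z \mid x)} \; \mathcal{L}_{\mathrm{IB}} \;=\; I(X;Z) \;-\; \beta\, I(Z;Y),

where X is the input, Y the target, Z the learned representation, and \beta \ge 0 controls the trade-off between compressing X and preserving the information in Z that is relevant for predicting Y.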
c401e2d1-41de-43ac-8745-13dee3d04d6d
2304.09355v5.pdf
text
In addition to the main structure of the paper, we dedicate a section to the challenges and opportunities in extending the information-theoretic perspective to other learning paradigms, such as energy-based models. We highlight the potential advantages of incorporating these extensions into self-supervised learning algorithms and discuss the technical and conceptual challenges that must be addressed.
null
869
130
72
image/png
3
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/17", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 2, "bbox": {"l": 89.00598907470703, "t": 282.90179443359375, "r": 523.5209350585938, "b": 217.75103759765625, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 403]}], "orig": "In addition to the main structure of the paper, we dedicate a section to the challenges and opportunities in extending the information-theoretic perspective to other learning paradigms, such as energy-based models. We highlight the potential advantages of incorporating these extensions into self-supervised learning algorithms and discuss the technical and conceptual challenges that must be addressed.", "text": "In addition to the main structure of the paper, we dedicate a section to the challenges and opportunities in extending the information-theoretic perspective to other learning paradigms, such as energy-based models. We highlight the potential advantages of incorporating these extensions into self-supervised learning algorithms and discuss the technical and conceptual challenges that must be addressed."}
null
4566090b-b276-42db-805e-e8e0bec215d5
2304.09355v5.pdf
text
The structure of the paper is as follows. Section 2 introduces the key concepts in supervised, semi-supervised, self-supervised learning, information theory, and representation learning. Section 3 presents a unified framework for multiview learning based on information theory. We first discuss what an optimal representation is and why compression is beneficial for learning. Next, we explore optimal representation in single-view supervised learning models and how they can be extended to unsupervised, semi-supervised, and multiview contexts. The focus then shifts to self-supervised learning, where the optimal representation remains an open question. Using the unified framework, we compare recent self-supervised algorithms and discuss their differences. We analyze the assumptions behind these models, their effects
null
870
237
72
image/png
3
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/18", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 2, "bbox": {"l": 88.85343170166016, "t": 205.67364501953125, "r": 524.1221313476562, "b": 86.9251708984375, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 822]}], "orig": "The structure of the paper is as follows. Section 2 introduces the key concepts in supervised, semi-supervised, self-supervised learning, information theory, and representation learning. Section 3 presents a unified framework for multiview learning based on information theory. We first discuss what an optimal representation is and why compression is beneficial for learning. Next, we explore optimal representation in single-view supervised learning models and how they can be extended to unsupervised, semi-supervised, and multiview contexts. The focus then shifts to self-supervised learning, where the optimal representation remains an open question. Using the unified framework, we compare recent self-supervised algorithms and discuss their differences. We analyze the assumptions behind these models, their effects", "text": "The structure of the paper is as follows. Section 2 introduces the key concepts in supervised, semi-supervised, self-supervised learning, information theory, and representation learning. Section 3 presents a unified framework for multiview learning based on information theory. We first discuss what an optimal representation is and why compression is beneficial for learning. Next, we explore optimal representation in single-view supervised learning models and how they can be extended to unsupervised, semi-supervised, and multiview contexts. The focus then shifts to self-supervised learning, where the optimal representation remains an open question. Using the unified framework, we compare recent self-supervised algorithms and discuss their differences. We analyze the assumptions behind these models, their effects"}
null
221c6a02-90ac-4372-ab95-62dd62c819e5
2304.09355v5.pdf
page_footer
2
null
11
18
72
image/png
3
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/19", "parent": {"cref": "#/body"}, "children": [], "label": "page_footer", "prov": [{"page_no": 2, "bbox": {"l": 302.83428955078125, "t": 66.747802734375, "r": 308.49029541015625, "b": 57.317256927490234, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 1]}], "orig": "2", "text": "2"}
null
50337d6f-7d1c-47c7-ad69-071869989e21
2304.09355v5.pdf
page_header
To Compress or Not to Compress
null
349
20
72
image/png
4
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/20", "parent": {"cref": "#/body"}, "children": [], "label": "page_header", "prov": [{"page_no": 3, "bbox": {"l": 216.72885131835938, "t": 752.1143798828125, "r": 390.9455871582031, "b": 741.8831176757812, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 30]}], "orig": "To Compress or Not to Compress", "text": "To Compress or Not to Compress"}
null
30f65453-77f8-4459-a58c-1f55a0d34b46
2304.09355v5.pdf
text
on the learned representation, and their varying perspectives on important information within the network.
null
866
48
72
image/png
4
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/21", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 3, "bbox": {"l": 89.13358306884766, "t": 698.26904296875, "r": 522.005615234375, "b": 674.5136108398438, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 106]}], "orig": "on the learned representation, and their varying perspectives on important information within the network.", "text": "on the learned representation, and their varying perspectives on important information within the network."}
null
2de25490-e7e8-4c00-82c0-295d0c861fe8
2304.09355v5.pdf
text
Section 5 addresses several technical challenges, discussing both theoretical and practical issues in estimating information-theoretic terms. We present recent methods for estimating these quantities, including variational bounds and estimators. Section 6 concludes the paper by offering insights into potential future research directions at the intersection of information theory, self-supervised learning, and deep neural networks. Our aim is to inspire further research that leverages information theory to advance our understanding of self-supervised learning and to develop more efficient and effective models for a broad range of applications.
null
870
185
72
image/png
4
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/22", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 3, "bbox": {"l": 89.21458435058594, "t": 661.738525390625, "r": 524.1165161132812, "b": 569.0333862304688, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 651]}], "orig": "Section 5 addresses several technical challenges, discussing both theoretical and practical issues in estimating theoretical information terms. We present recent methods for estimating these quantities, including variational bounds and estimators. Section 6 concludes the paper by offering insights into potential future research directions at the intersection of information theory, self-supervised learning, and deep neural networks. Our aim is to inspire further research that leverages information theory to advance our understanding of self-supervised learning and to develop more efficient and effective models for a broad range of applications.", "text": "Section 5 addresses several technical challenges, discussing both theoretical and practical issues in estimating theoretical information terms. We present recent methods for estimating these quantities, including variational bounds and estimators. Section 6 concludes the paper by offering insights into potential future research directions at the intersection of information theory, self-supervised learning, and deep neural networks. Our aim is to inspire further research that leverages information theory to advance our understanding of self-supervised learning and to develop more efficient and effective models for a broad range of applications."}
null
d7caceb7-9c5d-48c1-b227-2681ec5f83fb
2304.09355v5.pdf
section_header
2. Background and Fundamental Concepts
null
504
24
72
image/png
4
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/23", "parent": {"cref": "#/body"}, "children": [], "label": "section_header", "prov": [{"page_no": 3, "bbox": {"l": 89.69479370117188, "t": 540.44921875, "r": 341.64739990234375, "b": 528.4921875, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 38]}], "orig": "2. Background and Fundamental Concepts", "text": "2. Background and Fundamental Concepts", "level": 1}
null
0d8605ab-dc14-456a-a575-9b50b0672e1d
2304.09355v5.pdf
section_header
2.1 Multiview Representation Learning
null
436
22
72
image/png
4
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/24", "parent": {"cref": "#/body"}, "children": [], "label": "section_header", "prov": [{"page_no": 3, "bbox": {"l": 89.4501724243164, "t": 519.122802734375, "r": 307.7369384765625, "b": 508.2320556640625, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 37]}], "orig": "2.1 Multiview Representation Learning", "text": "2.1 Multiview Representation Learning", "level": 1}
null
7d25a754-68f8-4f09-9fb3-e5296ca4f5c2
2304.09355v5.pdf
text
Multiview learning has gained increasing attention and great practical success by using complementary information from multiple features or modalities. The multiview learning paradigm divides the input variable into multiple views from which the target variable should be predicted (Zhao et al., 2017b). Using this paradigm, one can eliminate hypotheses that contradict predictions from other views and provide a natural semi-supervised and self-supervised learning setting. A multiview dataset consists of data captured from multiple sources, modalities, and forms but with similar high-level semantics (Yan et al., 2021). This mechanism was initially used for natural-world data, combining image, text, audio, and video measurements. For example, photos of objects are taken from various angles, and our supervised task is to identify the objects. Another example is identifying a person by analyzing the video stream as one view and the audio stream as the other.
null
867
293
72
image/png
4
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/25", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 3, "bbox": {"l": 89.10533142089844, "t": 499.7522888183594, "r": 522.4059448242188, "b": 353.42889404296875, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 966]}], "orig": "Multiview learning has gained increasing attention and great practical success by using complementary information from multiple features or modalities. The multiview learning paradigm divides the input variable into multiple views from which the target variable should be predicted (Zhao et al., 2017b). Using this paradigm, one can eliminate hypotheses that contradict predictions from other views and provide a natural semi-supervised and self-supervised learning setting. A multiview dataset consists of data captured from multiple sources, modalities, and forms but with similar high-level semantics (Yan et al., 2021). This mechanism was initially used for natural-world data, combining image, text, audio, and video measurements. For example, photos of objects are taken from various angles, and our supervised task is to identify the objects. Another example is identifying a person by analyzing the video stream as one view and the audio stream as the other.", "text": "Multiview learning has gained increasing attention and great practical success by using complementary information from multiple features or modalities. The multiview learning paradigm divides the input variable into multiple views from which the target variable should be predicted (Zhao et al., 2017b). Using this paradigm, one can eliminate hypotheses that contradict predictions from other views and provide a natural semi-supervised and self-supervised learning setting. A multiview dataset consists of data captured from multiple sources, modalities, and forms but with similar high-level semantics (Yan et al., 2021). This mechanism was initially used for natural-world data, combining image, text, audio, and video measurements. For example, photos of objects are taken from various angles, and our supervised task is to identify the objects. Another example is identifying a person by analyzing the video stream as one view and the audio stream as the other."}
null
d09a4822-0422-492c-81f4-4af9b729dfb7
2304.09355v5.pdf
text
Although these views often provide different and complementary information about the same data, directly integrating them does not produce satisfactory results due to biases between multiple views (Yan et al., 2021). Thus, multiview representation learning involves identifying the underlying data structure and integrating the different views into a common feature space, resulting in high performance. In recent decades, multiview learning has been used for many machine learning tasks and influenced many algorithms, such as co-training mechanisms (Kumar and Daumé, 2011), subspace learning methods (Xue et al., 2019), and multiple kernel learning (MKL) (Bach and Jordan, 2002). Li et al. (2018) proposed two categories for multiview representation learning: (i) multiview representation fusion, which combines different features from multiple views into a single compact representation, and (ii) alignment of multiview representation, which attempts to capture the relationships among multiple different views through feature alignment. In this case, a learned mapping function embeds the data of each view, and the representations are regularized to form a multiview-aligned space. In this research direction, an early study is the Canonical Correlation Analysis (CCA) (Hotelling, 1936) and its kernel extensions (Bach and Jordan, 2003; Hardoon et al., 2004; Sun, 2013). In addition to CCA, multiview representation learning has penetrated a variety of learning methods, such as dimensionality reduction (Sun et al., 2010), clustering analysis (Yan et al., 2015), multiview sparse coding (Cao et al., 2013; Jia et al., 2010; Liu et al., 2014), and multimodal topic learning (Pu et al., 2020). However, despite their
null
871
511
72
image/png
4
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/26", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 3, "bbox": {"l": 88.7239990234375, "t": 341.7623291015625, "r": 523.8228759765625, "b": 86.5010986328125, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 1721]}], "orig": "Although these views often provide different and complementary information about the same data, directly integrating them does not produce satisfactory results due to biases between multiple views (Yan et al., 2021). Thus, multiview representation learning involves identifying the underlying data structure and integrating the different views into a common feature space, resulting in high performance. In recent decades, multiview learning has been used for many machine learning tasks and influenced many algorithms, such as co-training mechanisms (Kumar and Daum' e, 2011), subspace learning methods (Xue et al., 2019), and multiple kernel learning (MKL) (Bach and Jordan, 2002). Li et al. (2018) proposed two categories for multiview representation learning: (i) multiview representation fusion, which combines different features from multiple views into a single compact representation, and (ii) alignment of multiview representation, which attempts to capture the relationships among multiple different views through feature alignment. In this case, a learned mapping function embeds the data of each view, and the representations are regularized to form a multiviewaligned space. In this research direction, an early study is the Canonical Correlation Analysis (CCA) (Hotelling, 1936) and its kernel extensions (Bach and Jordan, 2003; Hardoon et al., 2004; Sun, 2013). In addition to CCA, multiview representation learning has penetrated a variety of learning methods, such as dimensionality reduction (Sun et al., 2010), clustering analysis (Yan et al., 2015), multiview sparse coding (Cao et al., 2013; Jia et al., 2010; Liu et al., 2014), and multimodal topic learning (Pu et al., 2020). However, despite their", "text": "Although these views often provide different and complementary information about the same data, directly integrating them does not produce satisfactory results due to biases between multiple views (Yan et al., 2021). Thus, multiview representation learning involves identifying the underlying data structure and integrating the different views into a common feature space, resulting in high performance. In recent decades, multiview learning has been used for many machine learning tasks and influenced many algorithms, such as co-training mechanisms (Kumar and Daum' e, 2011), subspace learning methods (Xue et al., 2019), and multiple kernel learning (MKL) (Bach and Jordan, 2002). Li et al. (2018) proposed two categories for multiview representation learning: (i) multiview representation fusion, which combines different features from multiple views into a single compact representation, and (ii) alignment of multiview representation, which attempts to capture the relationships among multiple different views through feature alignment. In this case, a learned mapping function embeds the data of each view, and the representations are regularized to form a multiviewaligned space. In this research direction, an early study is the Canonical Correlation Analysis (CCA) (Hotelling, 1936) and its kernel extensions (Bach and Jordan, 2003; Hardoon et al., 2004; Sun, 2013). 
In addition to CCA, multiview representation learning has penetrated a variety of learning methods, such as dimensionality reduction (Sun et al., 2010), clustering analysis (Yan et al., 2015), multiview sparse coding (Cao et al., 2013; Jia et al., 2010; Liu et al., 2014), and multimodal topic learning (Pu et al., 2020). However, despite their"}
null
09c78988-7391-4c7b-87c7-3cbac278771a
2304.09355v5.pdf
page_footer
3
null
11
18
72
image/png
4
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/27", "parent": {"cref": "#/body"}, "children": [], "label": "page_footer", "prov": [{"page_no": 3, "bbox": {"l": 302.9095764160156, "t": 66.6334228515625, "r": 308.49029541015625, "b": 57.317256927490234, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 1]}], "orig": "3", "text": "3"}
null
783bb0ae-863f-4690-914e-1f0190025223
2304.09355v5.pdf
text
promising results, these methods use handcrafted features and linear embedding functions, which cannot capture the nonlinear properties of multiview data.
null
869
48
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/28", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 4, "bbox": {"l": 89.12200164794922, "t": 698.1217041015625, "r": 523.5210571289062, "b": 674.1952514648438, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 154]}], "orig": "promising results, these methods use handcrafted features and linear embedding functions, which cannot capture the nonlinear properties of multiview data.", "text": "promising results, these methods use handcrafted features and linear embedding functions, which cannot capture the nonlinear properties of multiview data."}
null
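As a concrete illustration of the classical linear alignment methods just discussed, the sketch below runs a two-component CCA on two synthetic views that share a low-dimensional latent structure, using scikit-learn's CCA. The data, dimensions, and noise levels are arbitrary assumptions for the example.

import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 2))                   # shared structure behind both views
view1 = latent @ rng.normal(size=(2, 10)) + 0.5 * rng.normal(size=(500, 10))
view2 = latent @ rng.normal(size=(2, 8)) + 0.5 * rng.normal(size=(500, 8))

cca = CCA(n_components=2)
cca.fit(view1, view2)
z1, z2 = cca.transform(view1, view2)                 # projections into the aligned space

# The canonical correlations indicate how well the two views align per component.
corrs = [np.corrcoef(z1[:, k], z2[:, k])[0, 1] for k in range(2)]
print(corrs)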
1250b077-0ce4-4669-af11-785108f29afd
2304.09355v5.pdf
text
The emergence of deep learning has provided a powerful way to learn complex, nonlinear, and hierarchical representations of data. By incorporating multiple hierarchical layers, deep learning algorithms can learn complex, subtle, and abstract representations of target data. The success of deep learning in various application domains has led to a growing interest in deep multiview methods, which have shown promising results. Examples of these methods include deep multiview canonical correlation analysis (Andrew et al., 2013) as an extension of CCA, multiview clustering via deep matrix factorization (Zhao et al., 2017a), and the deep multiview spectral network (Huang et al., 2019). Moreover, deep architectures have been employed to generate effective representations in methods such as multiview convolutional neural networks (Liu et al., 2021a), multimodal deep Boltzmann machines (Srivastava and Salakhutdinov, 2014), multimodal deep autoencoders (Ngiam et al., 2011; Wang et al., 2015), and multimodal recurrent neural networks (Donahue et al., 2015; Karpathy and Fei-Fei, 2015; Mao et al., 2014).
null
870
348
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/29", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 4, "bbox": {"l": 88.89997863769531, "t": 661.9761962890625, "r": 524.1187133789062, "b": 487.9737548828125, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 1107]}], "orig": "The emergence of deep learning has provided a powerful way to learn complex, nonlinear, and hierarchical representations of data. By incorporating multiple hierarchical layers, deep learning algorithms can learn complex, subtle, and abstract representations of target data. The success of deep learning in various application domains has led to a growing interest in deep multiview methods, which have shown promising results. Examples of these methods include deep multiview canonical correlation analysis (Andrew et al., 2013) as an extension of CCA, multiview clustering via deep matrix factorization (Zhao et al., 2017a), and the deep multiview spectral network (Huang et al., 2019). Moreover, deep architectures have been employed to generate effective representations in methods such as multiview convolutional neural networks (Liu et al., 2021a), multimodal deep Boltzmann machines (Srivastava and Salakhutdinov, 2014), multimodal deep autoencoders (Ngiam et al., 2011; Wang et al., 2015), and multimodal recurrent neural networks (Donahue et al., 2015; Karpathy and Fei-Fei, 2015; Mao et al., 2014).", "text": "The emergence of deep learning has provided a powerful way to learn complex, nonlinear, and hierarchical representations of data. By incorporating multiple hierarchical layers, deep learning algorithms can learn complex, subtle, and abstract representations of target data. The success of deep learning in various application domains has led to a growing interest in deep multiview methods, which have shown promising results. Examples of these methods include deep multiview canonical correlation analysis (Andrew et al., 2013) as an extension of CCA, multiview clustering via deep matrix factorization (Zhao et al., 2017a), and the deep multiview spectral network (Huang et al., 2019). Moreover, deep architectures have been employed to generate effective representations in methods such as multiview convolutional neural networks (Liu et al., 2021a), multimodal deep Boltzmann machines (Srivastava and Salakhutdinov, 2014), multimodal deep autoencoders (Ngiam et al., 2011; Wang et al., 2015), and multimodal recurrent neural networks (Donahue et al., 2015; Karpathy and Fei-Fei, 2015; Mao et al., 2014)."}
null
c1fddcd9-76f5-4f62-81dc-7e6c1581e07a
2304.09355v5.pdf
section_header
2.2 Self-Supervised Learning
null
319
22
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/30", "parent": {"cref": "#/body"}, "children": [], "label": "section_header", "prov": [{"page_no": 4, "bbox": {"l": 89.47816467285156, "t": 471.16668701171875, "r": 249.0968780517578, "b": 459.9459533691406, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 28]}], "orig": "2.2 Self-Supervised Learning", "text": "2.2 Self-Supervised Learning", "level": 1}
null
bb66a9da-fe70-4291-a614-6030cded3918
2304.09355v5.pdf
text
Self-supervised learning (SSL) is a powerful technique that leverages unlabeled data to learn useful representations. In contrast to supervised learning, which relies on labeled data, SSL employs self-defined signals to establish a proxy objective between the input and the signal. The model is initially trained using this proxy objective and subsequently fine-tuned on the target task. Self-supervised signals, derived from the inherent co-occurrence relationships in the data, serve as self-supervision. Various such signals have been used to learn representations, including generative and joint embedding architectures (Bachman et al., 2019; Bar et al., 2022; Chen et al., 2020a,b).
null
866
212
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/31", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 4, "bbox": {"l": 89.25768280029297, "t": 451.8852233886719, "r": 522.4007568359375, "b": 345.9452209472656, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 687]}], "orig": "Self-supervised learning (SSL) is a powerful technique that leverages unlabeled data to learn useful representations. In contrast to supervised learning, which relies on labeled data, SSL employs self-defined signals to establish a proxy objective between the input and the signal. The model is initially trained using this proxy objective and subsequently fine-tuned on the target task. Self-supervised signals, derived from the inherent co-occurrence relationships in the data, serve as self-supervision. Various such signals have been used to learn representations, including generative and joint embedding architectures (Bachman et al., 2019; Bar et al., 2022; Chen et al., 2020a,b).", "text": "Self-supervised learning (SSL) is a powerful technique that leverages unlabeled data to learn useful representations. In contrast to supervised learning, which relies on labeled data, SSL employs self-defined signals to establish a proxy objective between the input and the signal. The model is initially trained using this proxy objective and subsequently fine-tuned on the target task. Self-supervised signals, derived from the inherent co-occurrence relationships in the data, serve as self-supervision. Various such signals have been used to learn representations, including generative and joint embedding architectures (Bachman et al., 2019; Bar et al., 2022; Chen et al., 2020a,b)."}
null
b42278f4-fcb7-4918-939f-6b1e82c76993
2304.09355v5.pdf
text
Two main categories of SSL architectures exist: (1) generative architectures based on reconstruction or prediction and (2) joint embedding architectures (Liu et al., 2021b). Both architecture classes can be trained using either contrastive or non-contrastive methods.
null
866
76
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/32", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 4, "bbox": {"l": 89.14336395263672, "t": 333.6808776855469, "r": 521.999755859375, "b": 295.71673583984375, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 267]}], "orig": "Two main categories of SSL architectures exist: (1) generative architectures based on reconstruction or prediction and (2) joint embedding architectures (Liu et al., 2021b). Both architecture classes can be trained using either contrastive or non-contrastive methods.", "text": "Two main categories of SSL architectures exist: (1) generative architectures based on reconstruction or prediction and (2) joint embedding architectures (Liu et al., 2021b). Both architecture classes can be trained using either contrastive or non-contrastive methods."}
null
ccd2a021-a9d8-4efe-95bd-3fe4c1287b1d
2304.09355v5.pdf
text
We begin by discussing these two main types of architectures:
null
592
22
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/33", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 4, "bbox": {"l": 88.83647918701172, "t": 283.6369934082031, "r": 385.017333984375, "b": 272.6397705078125, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 61]}], "orig": "We begin by discussing these two main types of architectures:", "text": "We begin by discussing these two main types of architectures:"}
null
cb65dc35-de76-4f0d-8fd0-36211e76bfeb
2304.09355v5.pdf
list_item
1. Generative Architecture: Generative architectures employ an objective function that measures the divergence between input data and predicted reconstructions, such as squared error. The architecture reconstructs data from a latent variable or a corrupted version, potentially with a latent variable's assistance. Notable examples of generative architectures include auto-encoders, sparse coding, sparse auto-encoders, and variational auto-encoders (Kingma and Welling, 2013; Lee et al., 2006; Ng et al., 2011). As the reconstruction task lacks a single correct answer, most generative architectures utilize a latent variable, which, when varied, generates multiple reconstructions. The latent variable's information content requires regularization to ensure the system reconstructs regions of high data density while avoiding a collapse by reconstructing the entire space. PCA regularizes the latent variable by limiting its dimensions, while sparse coding and sparse auto-encoders restrict the number of non-zero components. Variational auto-encoders regularize the latent variable by rendering it stochastic
null
842
347
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/34", "parent": {"cref": "#/groups/0"}, "children": [], "label": "list_item", "prov": [{"page_no": 4, "bbox": {"l": 103.18352508544922, "t": 260.2718505859375, "r": 524.1220092773438, "b": 86.83660888671875, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 1111]}], "orig": "1. Generative Architecture: Generative architectures employ an objective function that measures the divergence between input data and predicted reconstructions, such as squared error. The architecture reconstructs data from a latent variable or a corrupted version, potentially with a latent variable's assistance. Notable examples of generative architectures include auto-encoders, sparse coding, sparse auto-encoders, and variational auto-encoders (Kingma and Welling, 2013; Lee et al., 2006; Ng et al., 2011). As the reconstruction task lacks a single correct answer, most generative architectures utilize a latent variable, which, when varied, generates multiple reconstructions. The latent variable's information content requires regularization to ensure the system reconstructs regions of high data density while avoiding a collapse by reconstructing the entire space. PCA regularizes the latent variable by limiting its dimensions, while sparse coding and sparse auto-encoders restrict the number of non-zero components. Variational auto-encoders regularize the latent variable by rendering it stochastic", "text": "1. Generative Architecture: Generative architectures employ an objective function that measures the divergence between input data and predicted reconstructions, such as squared error. The architecture reconstructs data from a latent variable or a corrupted version, potentially with a latent variable's assistance. Notable examples of generative architectures include auto-encoders, sparse coding, sparse auto-encoders, and variational auto-encoders (Kingma and Welling, 2013; Lee et al., 2006; Ng et al., 2011). As the reconstruction task lacks a single correct answer, most generative architectures utilize a latent variable, which, when varied, generates multiple reconstructions. The latent variable's information content requires regularization to ensure the system reconstructs regions of high data density while avoiding a collapse by reconstructing the entire space. PCA regularizes the latent variable by limiting its dimensions, while sparse coding and sparse auto-encoders restrict the number of non-zero components. Variational auto-encoders regularize the latent variable by rendering it stochastic", "enumerated": false, "marker": "-"}
null
a00b5fb3-828f-4e60-9494-2b00c8e158ae
2304.09355v5.pdf
page_footer
4
null
11
19
72
image/png
5
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/35", "parent": {"cref": "#/body"}, "children": [], "label": "page_footer", "prov": [{"page_no": 4, "bbox": {"l": 302.75335693359375, "t": 66.92059326171875, "r": 308.49029541015625, "b": 57.317256927490234, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 1]}], "orig": "4", "text": "4"}
null
90d5d228-a832-4990-a7f7-1c32b6abaf9c
2304.09355v5.pdf
page_header
To Compress or Not to Compress
null
349
20
72
image/png
6
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/36", "parent": {"cref": "#/body"}, "children": [], "label": "page_header", "prov": [{"page_no": 5, "bbox": {"l": 216.6831512451172, "t": 752.0804443359375, "r": 390.9455871582031, "b": 741.8831176757812, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 30]}], "orig": "To Compress or Not to Compress", "text": "To Compress or Not to Compress"}
null
c938ef65-defa-4054-8562-9cfbfbb2863a
2304.09355v5.pdf
text
and maximizing the entropy of the distribution relative to a prior. Vector quantized variational auto-encoders (VQ-VAE) employ binary stochastic variables to achieve similar results (Van Den Oord et al., 2017).
null
811
76
72
image/png
6
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/37", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 5, "bbox": {"l": 116.46167755126953, "t": 698.3206787109375, "r": 522.00439453125, "b": 660.656494140625, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 210]}], "orig": "and maximizing the entropy of the distribution relative to a prior. Vector quantized variational auto-encoders (VQ-VAE) employ binary stochastic variables to achieve similar results (Van Den Oord et al., 2017).", "text": "and maximizing the entropy of the distribution relative to a prior. Vector quantized variational auto-encoders (VQ-VAE) employ binary stochastic variables to achieve similar results (Van Den Oord et al., 2017)."}
null
b9dec7a5-1a21-4a23-b64c-a90bc9949c4f
2304.09355v5.pdf
list_item
2. Joint Embedding Architectures (JEA): These architectures process multiple views of an input signal through encoders, producing representations of the views. The system is trained to ensure that these representations are both informative and mutually predictable. Examples include Siamese networks, where two identical encoders share weights (Chen et al., 2020a; Chen and He, 2021; Grill et al., 2020; He et al., 2020), and methods permitting encoders to differ (Bardes et al., 2021). A primary challenge with JEA is preventing informational collapse, in which the representations contain minimal information about the inputs, thereby facilitating their mutual prediction. JEA's advantage lies in the encoders' ability to eliminate noisy, unpredictable, or irrelevant information from the input within the representation space.
null
843
267
72
image/png
6
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/38", "parent": {"cref": "#/groups/1"}, "children": [], "label": "list_item", "prov": [{"page_no": 5, "bbox": {"l": 102.54643249511719, "t": 645.849853515625, "r": 524.1174926757812, "b": 512.730224609375, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 829]}], "orig": "2. Joint Embedding Architectures (JEA): These architectures process multiple views of an input signal through encoders, producing representations of the views. The system is trained to ensure that these representations are both informative and mutually predictable. Examples include Siamese networks, where two identical encoders share weights (Chen et al., 2020a; Chen and He, 2021; Grill et al., 2020; He et al., 2020), and methods permitting encoders to differ (Bardes et al., 2021). A primary challenge with JEA is preventing informational collapse, in which the representations contain minimal information about the inputs, thereby facilitating their mutual prediction. JEA's advantage lies in the encoders' ability to eliminate noisy, unpredictable, or irrelevant information from the input within the representation space.", "text": "2. Joint Embedding Architectures (JEA): These architectures process multiple views of an input signal through encoders, producing representations of the views. The system is trained to ensure that these representations are both informative and mutually predictable. Examples include Siamese networks, where two identical encoders share weights (Chen et al., 2020a; Chen and He, 2021; Grill et al., 2020; He et al., 2020), and methods permitting encoders to differ (Bardes et al., 2021). A primary challenge with JEA is preventing informational collapse, in which the representations contain minimal information about the inputs, thereby facilitating their mutual prediction. JEA's advantage lies in the encoders' ability to eliminate noisy, unpredictable, or irrelevant information from the input within the representation space.", "enumerated": false, "marker": "-"}
null
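Before turning to how these architectures are trained, here is a minimal sketch of the latent-variable regularization used by the first family: a variational auto-encoder objective combining a reconstruction term with a KL term that limits how much information the latent variable carries about the input. The layer sizes, Gaussian assumptions, and the beta weight are illustrative, not tied to any specific cited model.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16, hidden=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        return self.dec(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar, beta=1.0):
    recon = F.mse_loss(x_hat, x, reduction="sum") / x.size(0)
    # KL(q(z|x) || N(0, I)) regularizes how much information z retains about x.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
    return recon + beta * kl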
e0e5846f-ae2b-4399-939a-9363f704c232
2304.09355v5.pdf
text
To effectively train these architectures, it is essential to ensure that the representations of different signals are distinct. This can be achieved through either contrastive or non-contrastive methods:
null
870
75
72
image/png
6
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/39", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 5, "bbox": {"l": 88.84542846679688, "t": 498.2249755859375, "r": 523.8182983398438, "b": 460.6176452636719, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 202]}], "orig": "To effectively train these architectures, it is essential to ensure that the representations of different signals are distinct. This can be achieved through either contrastive or noncontrastive methods:", "text": "To effectively train these architectures, it is essential to ensure that the representations of different signals are distinct. This can be achieved through either contrastive or noncontrastive methods:"}
null
132c7817-11d5-4c7f-9a03-dc251b4bb726
2304.09355v5.pdf
list_item
· Contrastive Methods: Contrastive methods utilize data points from the training set as positive samples and generate points outside the region of high data density as contrastive samples. The energy (e.g., reconstruction error for generative architectures or representation predictive error for JEA) should be low for positive samples and higher for contrastive samples. Various loss functions involving the energies of pairs or sets of samples can be minimized to achieve this objective.
null
832
159
72
image/png
6
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/40", "parent": {"cref": "#/groups/2"}, "children": [], "label": "list_item", "prov": [{"page_no": 5, "bbox": {"l": 105.7928695678711, "t": 445.7911376953125, "r": 522.2193603515625, "b": 366.61651611328125, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 490]}], "orig": "\u00b7 Contrastive Methods: Contrastive methods utilize data points from the training set as positive samples and generate points outside the region of high data density as contrastive samples . The energy (e.g., reconstruction error for generative architectures or representation predictive error for JEA) should be low for positive samples and higher for contrastive samples. Various loss functions involving the energies of pairs or sets of samples can be minimized to achieve this objective.", "text": "\u00b7 Contrastive Methods: Contrastive methods utilize data points from the training set as positive samples and generate points outside the region of high data density as contrastive samples . The energy (e.g., reconstruction error for generative architectures or representation predictive error for JEA) should be low for positive samples and higher for contrastive samples. Various loss functions involving the energies of pairs or sets of samples can be minimized to achieve this objective.", "enumerated": false, "marker": "-"}
null
b7aa1139-2d13-45d2-b2f7-84901a4d7bf5
2304.09355v5.pdf
list_item
· Non-Contrastive Methods: Non-contrastive methods prevent the energy landscape's collapse by limiting the volume of space that can take low energy, either through architectural constraints or through a regularizer in the energy or training objective. In latent-variable generative architectures, preventing collapse is achieved by limiting or minimizing the information content of the latent variable. In JEA, collapse is prevented by maximizing the information content of the representations.
null
836
158
72
image/png
6
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/41", "parent": {"cref": "#/groups/2"}, "children": [], "label": "list_item", "prov": [{"page_no": 5, "bbox": {"l": 106.07862091064453, "t": 352.3957214355469, "r": 523.8255615234375, "b": 273.4095458984375, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 494]}], "orig": "\u00b7 Non-Contrastive Methods: Non-contrastive methods prevent the energy landscape's collapse by limiting the volume of space that can take low energy, either through architectural constraints or through a regularizer in the energy or training objective. In latent-variable generative architectures, preventing collapse is achieved by limiting or minimizing the information content of the latent variable. In JEA, collapse is prevented by maximizing the information content of the representations.", "text": "\u00b7 Non-Contrastive Methods: Non-contrastive methods prevent the energy landscape's collapse by limiting the volume of space that can take low energy, either through architectural constraints or through a regularizer in the energy or training objective. In latent-variable generative architectures, preventing collapse is achieved by limiting or minimizing the information content of the latent variable. In JEA, collapse is prevented by maximizing the information content of the representations.", "enumerated": false, "marker": "-"}
null
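The two training strategies above can be summarized in a few lines. The sketch below shows an InfoNCE-style contrastive loss (in the spirit of SimCLR) and a simple variance-covariance regularizer of the kind used by non-contrastive joint embedding methods to keep representations informative and prevent collapse. Temperatures, weights, and tensor shapes are illustrative assumptions rather than the exact objectives of the cited methods.

import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """Contrastive loss: each z1[i] should match z2[i] against all other z2[j]."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature               # (N, N) similarity matrix
    targets = torch.arange(z1.size(0))
    return F.cross_entropy(logits, targets)

def variance_covariance_penalty(z, eps=1e-4):
    """Non-contrastive regularizer: keep per-dimension variance up and dimensions decorrelated."""
    z = z - z.mean(dim=0)
    std = torch.sqrt(z.var(dim=0) + eps)
    var_loss = F.relu(1.0 - std).mean()              # hinge on the standard deviation
    cov = (z.t() @ z) / (z.size(0) - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    cov_loss = (off_diag ** 2).sum() / z.size(1)
    return var_loss + cov_loss

# Illustrative usage with random "two views" of a batch of 128 samples:
z1, z2 = torch.randn(128, 64), torch.randn(128, 64)
contrastive_loss = info_nce(z1, z2)
non_contrastive_loss = F.mse_loss(z1, z2) + variance_covariance_penalty(z1) \
                       + variance_covariance_penalty(z2)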
2aed7c8d-402c-44e6-a193-bb9f8131058d
2304.09355v5.pdf
text
We now present a few concrete examples of popular models that employ various combinations of generative architectures, joint embedding architectures, contrastive training, and non-contrastive training:
null
870
76
72
image/png
6
application/pdf
1.0.0
[]
null
null
{"self_ref": "#/texts/42", "parent": {"cref": "#/body"}, "children": [], "label": "text", "prov": [{"page_no": 5, "bbox": {"l": 89.00202941894531, "t": 258.313232421875, "r": 523.8259887695312, "b": 220.55511474609375, "coord_origin": "BOTTOMLEFT"}, "charspan": [0, 200]}], "orig": "We now present a few concrete examples of popular models that employ various combinations of generative architectures, joint embedding architectures, contrastive training, and noncontrastive training:", "text": "We now present a few concrete examples of popular models that employ various combinations of generative architectures, joint embedding architectures, contrastive training, and noncontrastive training:"}
null