arXiv:2505.02872

Decoding Open-Ended Information Seeking Goals from Eye Movements in Reading

Published on May 4 · Submitted by scaperex on May 7

Abstract

When reading, we often have specific information that interests us in a text. For example, you might be reading this paper because you are curious about LLMs for eye movements in reading, the experimental design, or perhaps you only care about the question "but does it work?". More broadly, in daily life, people approach texts with any number of text-specific goals that guide their reading behavior. In this work, we ask, for the first time, whether open-ended reading goals can be automatically decoded from eye movements in reading. To address this question, we introduce goal classification and goal reconstruction tasks and evaluation frameworks, and use large-scale eye-tracking data from reading in English with hundreds of text-specific information seeking tasks. We develop and compare several discriminative and generative multimodal LLMs that combine eye movements and text for goal classification and goal reconstruction. Our experiments show considerable success on both tasks, suggesting that LLMs can extract valuable information about the readers' text-specific goals from eye movements.
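To make the discriminative goal-classification task concrete, here is a minimal PyTorch sketch that scores candidate questions against a reader's fixation sequence. Every design choice below (GRU encoders, a (fixated word, log duration) fixation representation, a dot-product scorer, and all names such as `GoalClassifier`) is an illustrative assumption for this page, not the paper's actual multimodal-LLM architecture.

```python
# Hedged sketch of discriminative goal classification from eye movements.
# NOTE: the encoders, fixation features, and scorer are illustrative
# assumptions, not the paper's actual multimodal-LLM architecture.
import torch
import torch.nn as nn

class GoalClassifier(nn.Module):
    def __init__(self, vocab_size: int, d_model: int = 128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_model)
        # Each fixation = embedding of the fixated word + a scalar
        # log-duration feature, projected back to d_model.
        self.fix_proj = nn.Linear(d_model + 1, d_model)
        self.scanpath_enc = nn.GRU(d_model, d_model, batch_first=True)
        self.question_enc = nn.GRU(d_model, d_model, batch_first=True)

    def forward(self, fix_word_ids, fix_log_durs, question_ids):
        # fix_word_ids: (B, F) indices of fixated words, in reading order
        # fix_log_durs: (B, F) log fixation durations
        # question_ids: (B, Q) token ids of one candidate question
        fix_feats = torch.cat(
            [self.word_emb(fix_word_ids), fix_log_durs.unsqueeze(-1)], dim=-1
        )
        _, h_fix = self.scanpath_enc(self.fix_proj(fix_feats))
        _, h_q = self.question_enc(self.word_emb(question_ids))
        # Compatibility score between the scanpath and the question.
        return (h_fix[-1] * h_q[-1]).sum(dim=-1)  # shape (B,)
```

In this framing, inference would score every candidate question for a trial and take the argmax, and training could use a cross-entropy loss over the candidate set; the paper's models operate on the same inputs (eye movements plus text) but with LLM backbones.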

Community

Paper submitter

👀 What are you really looking for when you read?
We don’t always read to understand everything—sometimes we have specific questions in mind. But can machines know what we're looking for, just from how our eyes move?

In our latest work, we explore whether large multimodal models can decode open-ended reading goals from eye-tracking data.

1️⃣ An example paragraph, candidate questions, and how they relate to a specific part of the text:
[Figure: questions-crop.jpg]

2️⃣ How we use discriminative and generative models to infer reading goals from eye movements in reading (a hedged prompt sketch follows below):
[Figure: Tasks_Diagrams-crop.jpg]
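As a rough illustration of the generative side, one simple way to hand a scanpath to an off-the-shelf instruction-tuned LLM is to serialize it as text and prompt the model to reconstruct the reader's question. The serialization format below is an assumption made for this sketch; it is not necessarily how the paper's generative models encode eye movements.

```python
# Hedged sketch of generative goal reconstruction: serialize the paragraph
# and the fixation sequence into a prompt for any instruction-tuned LLM.
# The prompt format is an illustrative assumption, not the paper's actual
# eye-movement encoding.
def build_reconstruction_prompt(words, fixations):
    """words: list of paragraph tokens; fixations: (word_index, duration_ms) pairs."""
    scanpath = " -> ".join(f"{words[i]} [{dur} ms]" for i, dur in fixations)
    return (
        "A reader scanned the paragraph below while looking for the answer "
        "to one specific question.\n\n"
        f"Paragraph: {' '.join(words)}\n"
        f"Fixation sequence (word [duration]): {scanpath}\n\n"
        "Reconstruct the question the reader was trying to answer."
    )

# Example with toy data; the resulting prompt would be sent to a
# chat/instruct model of your choice.
words = "The Nile flows north through eleven countries".split()
fixations = [(1, 210), (2, 180), (4, 260), (5, 310)]
prompt = build_reconstruction_prompt(words, fixations)
```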

